We have previously shown that the patterns in which files are
accessed offer information that can accurately predict upcoming file
accesses. Most modern caches ignore these patterns, thereby failing
to use information that enables significant reductions in I/O latency.
While prefetching heuristics that expect sequential accesses are often
effective methods to reduce I/O latency, they cannot be applied across
files, because the abstraction of a file has no intrinsic concept of a
successor. This limits the ability of modern file systems to
prefetch. Here we present our implementation of a predictive
prefetching system that uses file access patterns to reduce
I/O latency.
Previously we developed a technique called Partitioned Context
Modeling (PCM) [13] that efficiently models file accesses to
reliably predict upcoming requests. We present our experiences in
implementing predictive prefetching based on file access patterns.
From the lessons learned we developed a new technique, Extended
Partitioned Context Modeling (EPCM), which has even better
performance.
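To make the idea concrete, the following is a minimal sketch of context-based prediction over a stream of file accesses. It assumes a plain dictionary of contexts with count-based likelihood estimates and an illustrative prefetch threshold; it is only an illustration of the general approach, not the partitioned trie structures used by PCM or EPCM.

```python
# Illustrative sketch (not the paper's PCM/EPCM code): a small multi-order
# context model that counts which file tends to follow a given sequence of
# recent accesses and predicts likely successors.
from collections import defaultdict

class ContextModel:
    def __init__(self, max_order=3):
        self.max_order = max_order                            # longest context tracked
        self.counts = defaultdict(lambda: defaultdict(int))   # context -> {next_file: count}
        self.history = []                                     # most recent accesses

    def record(self, file_id):
        """Update successor counts for every context ending at the previous access."""
        for order in range(1, self.max_order + 1):
            if len(self.history) >= order:
                context = tuple(self.history[-order:])
                self.counts[context][file_id] += 1
        self.history.append(file_id)
        self.history = self.history[-self.max_order:]

    def predict(self, threshold=0.1):
        """Return files whose estimated likelihood of being accessed next
        exceeds the prefetch threshold, preferring longer contexts."""
        candidates = {}
        for order in range(self.max_order, 0, -1):
            if len(self.history) < order:
                continue
            seen = self.counts.get(tuple(self.history[-order:]))
            if not seen:
                continue
            total = sum(seen.values())
            for f, c in seen.items():
                if c / total >= threshold:
                    candidates.setdefault(f, c / total)
        return sorted(candidates, key=candidates.get, reverse=True)
```

For example, if `model.record()` has repeatedly seen a compile touch `main.c`, then `main.o`, then `a.out`, a call to `model.predict()` right after `main.c` is accessed would rank `main.o` highly, and the cache could begin reading it in before the request arrives.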
We have modified the Linux kernel to prefetch file data based on Partitioned Context Modeling and Extended Partitioned Context
Modeling. With this implementation we examine how a prefetching
policy that uses such models to predict upcoming accesses can result
in large reductions in I/O latencies. We tested our implementation
with four different application-based benchmarks and saw I/O latency
reduced by 31% to 90% and elapsed time reduced by 11% to 16%.
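The effect of such a policy can be pictured with a short user-space sketch: once the model names likely successors, their data can be requested ahead of time, for instance with posix_fadvise. This is only an illustration of the idea; the implementation described here issues its prefetches from inside the modified kernel.

```python
import os

def prefetch(paths):
    """Hint the page cache to read these files ahead of an expected access.
    User-space illustration only; the paper's policy operates in the kernel."""
    for path in paths:
        try:
            fd = os.open(path, os.O_RDONLY)
            # POSIX_FADV_WILLNEED asks the kernel to start readahead for the whole file.
            os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_WILLNEED)
            os.close(fd)
        except OSError:
            pass  # missing or unreadable files are simply skipped
```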