Since demand-paged data, prefetched data, and sometimes modified data
share the same cache in most data server architectures, a way is needed
to divide the cache among these various types and to manage each
portion optimally, so as to maximize the overall performance of the
system.
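To make the problem concrete, the following is a minimal sketch of such a divided cache: a fixed split between a demand-paged portion and a prefetched portion, each managed by its own LRU list. The split fraction, the class names, and the policy of migrating a referenced prefetched page into the demand portion are illustrative assumptions for exposition only; they are not the algorithm presented in this paper.

<PRE>
from collections import OrderedDict

class PartitionedCache:
    """Sketch of a cache divided between demand-paged and prefetched
    data, with each portion managed by its own LRU list.
    The split `prefetch_frac` is a hypothetical tuning knob."""

    def __init__(self, capacity, prefetch_frac=0.5):
        self.prefetch_cap = int(capacity * prefetch_frac)
        self.demand_cap = capacity - self.prefetch_cap
        self.demand = OrderedDict()    # pages fetched on demand misses
        self.prefetch = OrderedDict()  # pages brought in by read-ahead

    def _insert(self, lru, cap, page):
        lru[page] = True
        lru.move_to_end(page)
        while len(lru) > cap:
            lru.popitem(last=False)    # evict the least recently used page

    def access(self, page):
        """Reference a page; returns True on a cache hit."""
        if page in self.prefetch:
            # Assumed policy: a referenced prefetched page migrates
            # to the demand-paged portion.
            del self.prefetch[page]
            self._insert(self.demand, self.demand_cap, page)
            return True
        if page in self.demand:
            self.demand.move_to_end(page)
            return True
        # Demand miss: fetch the page into the demand portion.
        self._insert(self.demand, self.demand_cap, page)
        return False

    def prefetch_page(self, page):
        """Speculatively load a page into the prefetched portion."""
        if page not in self.demand:
            self._insert(self.prefetch, self.prefetch_cap, page)
</PRE>

Even in this simple sketch, the key open question is how to manage the prefetched portion: LRU is a reasonable default for demand-paged data, but, as argued below, no provably optimal online policy for the prefetched portion had been established.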
While a large number of cache replacement algorithms for demand-paged
data have been devised (for example, LRU, CLOCK, FBR, 2Q, LRFU, LIRS,
MQ, and ARC), to the best of our knowledge there has, surprisingly,
been no research on an online optimal cache replacement policy for
prefetched data.
In this paper, we provide this missing link: we present a provably
optimal cache replacement algorithm for multiple sequential streams
sharing a cache, together with a very simple, practical implementation
of it. We believe that much of the work on understanding the
interactions among the various types of cached data
[4,17,30,33]
will benefit from and incorporate our algorithm and analysis.