So far, we have focused primarily on designing an effective sequential prefetching strategy along with an LRU-based caching policy for housing sequential data. However, typical workloads contain a mix of sequential and random streams. We now turn our attention to designing an integrated cache replacement policy that divides the cache space between these two classes of data so as to minimize the overall miss rate.
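To make the setup concrete, the sketch below illustrates one simple way such a partitioned cache could be organized: two LRU lists, one for sequential and one for random pages, sharing a fixed total capacity. This is only a minimal illustration, not the policy developed in this section; the class name `PartitionedCache`, the method `access`, and the fixed parameter `seq_target` are all hypothetical. In particular, a fixed `seq_target` is exactly what an integrated policy would replace with an adaptive split chosen to minimize the overall miss rate.

```python
from collections import OrderedDict

class PartitionedCache:
    """Toy cache split into two LRU lists, one holding sequential
    pages and one holding random pages. The split point seq_target
    is fixed here; an integrated policy would adapt it online."""

    def __init__(self, capacity, seq_target):
        self.capacity = capacity
        self.seq_target = seq_target  # desired size of the sequential list
        self.lists = {"seq": OrderedDict(), "rand": OrderedDict()}

    def access(self, page, is_sequential):
        """Return True on a hit, False on a miss (inserting the page)."""
        # On a hit in either list, promote the page to the MRU position.
        for lst in self.lists.values():
            if page in lst:
                lst.move_to_end(page)
                return True
        # On a miss, insert into the list matching the stream type.
        kind = "seq" if is_sequential else "rand"
        self.lists[kind][page] = True
        # If over capacity, evict the LRU page from whichever list
        # exceeds its target share of the cache.
        if sum(len(l) for l in self.lists.values()) > self.capacity:
            victim = "seq" if len(self.lists["seq"]) > self.seq_target else "rand"
            if not self.lists[victim]:
                victim = "rand" if victim == "seq" else "seq"
            self.lists[victim].popitem(last=False)
        return False
```

Even this toy version exposes the central design question: where to place the boundary between the two lists, since cache space given to sequential data is taken away from random data and vice versa.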