The RANDOM list contains essentially only demand-paged, non-sequential data. For demand-paged data, LRU is a stack algorithm. In other words, increasing (respectively, decreasing) the length of the list leads to a smaller (respectively, larger) number of misses.
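To make the stack property concrete, the following minimal sketch (in Python; ours, not from the original) replays one demand-paged trace through LRU caches of two sizes. The inclusion property of LRU guarantees that the larger cache never incurs more misses:

```python
from collections import OrderedDict

def lru_misses(trace, capacity):
    """Replay an access trace through an LRU cache of the given
    capacity and return the total number of misses."""
    cache = OrderedDict()  # front = LRU end, back = MRU end
    misses = 0
    for page in trace:
        if page in cache:
            cache.move_to_end(page)        # hit: refresh recency
        else:
            misses += 1                    # miss: admit the page
            cache[page] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict the LRU page
    return misses

trace = [1, 2, 3, 1, 4, 2, 5, 1, 3, 2]
# Stack property: a larger LRU cache never misses more on the same trace.
assert lru_misses(trace, 4) <= lru_misses(trace, 3)
```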
Let $r$ denote the rate of cache misses in RANDOM, and let $L$ denote the length of the RANDOM list. To compute the marginal utility of RANDOM, we examine how $r$ changes, namely, $\partial r/\partial L$, the change in $r$ in response to a small change in $L$. By the stack property, as $L$ increases (respectively, decreases), $r$ decreases (respectively, increases). Hence, $\partial r/\partial L$ is always negative. The marginal utility is measured by the magnitude $|\partial r/\partial L|$ of this quantity. Unlike the SEQ list, where an analytical model was necessary, for the RANDOM list this quantity can be estimated directly from real-time cache introspection.
We will monitor the cache space $\Delta L$ at the bottom of the RANDOM list (see Figure 2), and observe the rate of hits $\Delta r$ in this region. A bottom hit is an indication that a miss has been saved due to the cache space at the bottom of RANDOM. In other words, if the cache space $\Delta L$ at the bottom were not allocated to RANDOM, then the rate of misses would increase by $\Delta r$. We assume that $\partial r/\partial L$ is a constant in a small region (generally much smaller than even $\Delta L$) at the bottom of the list where our locally adaptive algorithm is active. Hence, as a corollary, if a small amount of cache space were added to RANDOM, then the misses would decrease in proportion to $\Delta r/\Delta L$.
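The sketch below (our illustration, with assumed names such as `delta_L`; the paper's actual data structures are not shown here) maintains an LRU list for RANDOM and counts hits that land in its bottom $\Delta L$ pages, from which the marginal utility $\Delta r/\Delta L$ can be read off:

```python
from collections import OrderedDict

class RandomList:
    """LRU list for RANDOM that counts hits landing in its bottom
    delta_L pages, approximating the marginal utility Delta r / Delta L.
    Illustrative sketch: names are ours, and hit rates are measured
    per access rather than per unit time for simplicity."""

    def __init__(self, capacity, delta_L):
        self.capacity = capacity
        self.delta_L = delta_L
        self.pages = OrderedDict()  # front = LRU (bottom), back = MRU (top)
        self.bottom_hits = 0
        self.accesses = 0

    def access(self, page):
        """Record one access; return True on a hit."""
        self.accesses += 1
        if page in self.pages:
            # Distance from the LRU end; a linear scan suffices for a
            # sketch, though a real list would track positions directly.
            position = list(self.pages).index(page)
            if position < self.delta_L:
                self.bottom_hits += 1  # a miss saved by the bottom space
            self.pages.move_to_end(page)
            return True
        self.pages[page] = True
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)  # evict from the bottom
        return False

    def marginal_utility(self):
        """Observed Delta r / Delta L: bottom-hit rate per page of space."""
        return (self.bottom_hits / max(self.accesses, 1)) / self.delta_L
```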
However, allocating more cache space to RANDOM takes away an equal amount from SEQ, and, hence, the trade-off needs to be carefully weighed. The optimum is attained when the marginal utilities of the two lists are equalized.
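As a sketch of what this equalization could look like in code (a hypothetical helper of ours, not the paper's algorithm), one adaptation step compares the two marginal utilities and nudges a small amount of space toward the list that would save more misses per extra page; the RANDOM size follows implicitly, since the two lists partition the cache:

```python
def rebalance(seq_size, mu_seq, mu_random, total, step=1, min_size=1):
    """One adaptation step (hypothetical helper, not the paper's code):
    move a small amount of cache space toward the list whose marginal
    utility is higher. mu_seq would come from the analytical model for
    SEQ; mu_random is the observed Delta r / Delta L for RANDOM. The
    RANDOM size is implicitly total - seq_size."""
    if mu_seq > mu_random:
        seq_size += step   # an extra page saves more misses in SEQ
    elif mu_random > mu_seq:
        seq_size -= step   # an extra page saves more misses in RANDOM
    # If mu_seq == mu_random, neither move helps: the optimum is reached.
    return max(min_size, min(seq_size, total - min_size))
```

When the two marginal utilities are equal, shifting space in either direction cannot reduce the overall miss rate, which is exactly the optimality condition stated above.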