All previously reported results for the manually modified applications were obtained with a 12 MB file cache [Patterson95, Patterson97]. We measure the sensitivity of our results to the file cache size by running the benchmarks with a smaller (6 MB) file cache and a larger (64 MB) file cache. The cache size can affect performance because the sequential read-ahead policy sometimes prefetches data that will be accessed much later, and larger cache sizes may allow more of this data to remain in memory until that future access. For example, as shown in Table 7, the performance of the original, non-hinting Gnuld improves significantly as the cache size increases, reducing the benefit that can be obtained through prefetching. The speculating Gnuld achieves relatively less benefit with a 64 MB cache because many of the read calls for which it can generate hints no longer require prefetching, whereas many of the read calls it is unable to hint continue to cause I/O stalls. For Agrep and XDataSlice, there is little data reuse and sequential read-ahead seldom fetches data that is accessed much later, so the cache size does not affect the benefit obtained by the hinting applications.
Table 7. Elapsed time (seconds) and percentage improvement for three file cache sizes.

| Benchmark | Version | 6 MB: Time (s) | 6 MB: % improvement | 12 MB: Time (s) | 12 MB: % improvement | 64 MB: Time (s) | 64 MB: % improvement |
|---|---|---|---|---|---|---|---|
| Agrep | Original | 21.3 | -- | 21.4 | -- | 21.2 | -- |
| | SpecHint | 6.5 | 69% | 6.5 | 70% | 6.4 | 70% |
| | Manual | 6.3 | 70% | 6.2 | 71% | 6.1 | 71% |
| Gnuld | Original | 106.3 | -- | 89.5 | -- | 56.5 | -- |
| | SpecHint | 74.7 | 30% | 63.3 | 29% | 45.2 | 20% |
| | Manual | 34.4 | 68% | 30.2 | 66% | 25.4 | 55% |
| XDataSlice | Original | 295.0 | -- | 324.6 | -- | 279.0 | -- |
| | SpecHint | 94.6 | 68% | 97.0 | 70% | 87.8 | 69% |
| | Manual | 91.4 | 69% | 94.1 | 71% | 85.8 | 69% |
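To make the interplay between read-ahead and cache size concrete, the following minimal sketch (not part of the original study) simulates an LRU block cache with fixed-depth sequential read-ahead over a hypothetical two-pass access trace. The read-ahead depth, the 8 KB block size, and the trace itself are illustrative assumptions rather than measured parameters of the benchmarks.

```python
from collections import OrderedDict

READAHEAD_DEPTH = 8   # assumed read-ahead depth, in blocks


def run_trace(trace, cache_blocks):
    """Simulate an LRU block cache with sequential read-ahead.

    Returns the number of demand misses, i.e., accesses that would
    stall on disk I/O because the block was neither resident nor
    already prefetched.
    """
    cache = OrderedDict()   # block number -> True, kept in LRU order
    stalls = 0

    def touch(block):
        # Insert or refresh a block, evicting the LRU block if the cache is full.
        if block in cache:
            cache.move_to_end(block)
        else:
            if len(cache) >= cache_blocks:
                cache.popitem(last=False)
            cache[block] = True

    for block in trace:
        if block not in cache:
            stalls += 1
            # Sequential read-ahead: prefetch the following blocks.
            # With a large cache, these survive until they are reused,
            # even if the reuse happens much later in the trace.
            for b in range(block, block + READAHEAD_DEPTH + 1):
                touch(b)
        else:
            touch(block)
    return stalls


if __name__ == "__main__":
    # Hypothetical trace: one sequential pass over 32 MB of 8 KB blocks,
    # followed by a second, much later pass over the same data.
    trace = list(range(4096)) + list(range(4096))
    for blocks in (768, 1536, 8192):   # roughly 6 MB, 12 MB, 64 MB of 8 KB blocks
        print(f"{blocks * 8 // 1024} MB cache: {run_trace(trace, blocks)} stalls")
```

In this sketch, only the 64 MB configuration keeps the prefetched blocks resident long enough for the second pass to hit entirely in the cache, so the number of stalls roughly halves; with the 6 MB and 12 MB configurations the second pass misses again. This mirrors the way the original, non-hinting Gnuld improves as the cache grows, which in turn shrinks the room left for prefetching to help.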