Kavita Bala* M. Frans Kaashoek* William E. Weihl*
MIT Laboratory for Computer Science
Cambridge, MA 02139, USA
[Footnote:E-mail: {kaybee, kaashoek, weihl}@lcs.mit.edu.
World Wide Web URL: https://www.psg.lcs.mit.edu/.
Prof. Weihl is currently supported by DEC while on sabbatical
at DEC SRC.
For a range of applications, prefetching decreases the number of kernel TLB misses by 40% to 50%, and caching decreases TLB penalties by providing a fast path for over 90% of the misses. Our caching scheme also decreases the number of nested TLB traps due to the page table hierarchy, reducing the number of kernel TLB miss traps for applications by 20% to 40%. Prefetching and caching, when used alone, each improve application performance by up to 1.5%; when used together, they improve application performance by up to 3%. On synthetic benchmarks that involve frequent communication among several different address spaces (and thus put more pressure on the TLB), prefetching improves overall performance by about 6%, caching improves overall performance by about 10%, and the two used together improve overall performance by about 12%.
Our techniques are very effective in reducing kernel TLB penalties, which currently range from 1% to 5% of application runtime for the benchmarks studied. Since processor speeds continue to increase relative to memory speeds, our schemes should be even more effective in improving application performance in future architectures.