
4.2 Cache Behavior


Table 2: L2 data cache misses per KB of transmitted data.

OS Type         6 conns    192 conns    16384 conns
UP                 1.83         4.08          18.49
MsgP              37.29        28.39          40.45
ConnP-T(4)        52.25        50.38          51.39
ConnP-L(128)      28.91        26.18          40.36


Table 2 shows the number of L2 data cache misses per KB of payload data transmitted, effectively normalizing cache hierarchy efficiency to network bandwidth. The uniprocessor kernel incurs very few cache misses relative to the multiprocessor configurations because connection state never migrates between processors. As connections are added, the associated growth in connection state stresses the cache and directly results in more cache misses [5,6].
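
This normalization is straightforward to reproduce from raw hardware counters. The C sketch below (the counter values and function name are hypothetical, not taken from the paper's runs) computes the misses-per-KB metric reported in Table 2.

    /* Minimal sketch: normalize L2 data cache misses to payload
     * transmitted, as in Table 2. Inputs would come from hardware
     * performance counters and the driver's byte counts. */
    #include <stdio.h>

    double misses_per_kb(unsigned long long l2_data_misses,
                         unsigned long long payload_bytes)
    {
        return (double)l2_data_misses / ((double)payload_bytes / 1024.0);
    }

    int main(void)
    {
        /* Illustrative numbers only; yields 1.83 misses/KB,
         * matching the UP row at 6 connections. */
        printf("%.2f misses/KB\n",
               misses_per_kb(1830000ULL, 1024ULL * 1000000ULL));
        return 0;
    }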

The parallel network stacks incur significantly more cache misses per KB of transmitted data because of data migration and lock accesses. Surprisingly, ConnP-T(4) incurs the most cache misses despite each thread being pinned to a specific processor. While thread pinning can improve locality by eliminating migration of connection metadata, frequently updated socket metadata is still shared between the application and protocol threads, which leads to data migration and a higher cache miss rate.
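
The sharing pattern behind ConnP-T(4)'s miss rate can be sketched in a few lines of C. In the sketch below (all names are hypothetical; this is not the paper's implementation), a protocol thread pinned to one CPU and an application thread pinned to another both update fields of the same socket metadata structure, so the cache line holding that structure migrates between the two caches on every update, which is exactly the data movement that pinning the protocol thread was meant to avoid.

    /* Sketch of cross-CPU sharing of socket metadata under pinned
     * threads. Assumes Linux pthread affinity APIs. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    struct sock_meta {
        unsigned long proto_bytes; /* written by the protocol thread */
        unsigned long app_bytes;   /* written by the application thread */
        /* Both fields sit on the same cache line, so a write from
         * either CPU invalidates the other CPU's cached copy. */
    };

    static struct sock_meta meta;

    static void pin_to_cpu(int cpu)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }

    static void *protocol_thread(void *arg)
    {
        (void)arg;
        pin_to_cpu(0); /* pinned, as in ConnP-T */
        for (int i = 0; i < 1000000; i++)
            __sync_fetch_and_add(&meta.proto_bytes, 1460);
        return NULL;
    }

    static void *app_thread(void *arg)
    {
        (void)arg;
        pin_to_cpu(1); /* the application runs on another CPU */
        for (int i = 0; i < 1000000; i++)
            __sync_fetch_and_add(&meta.app_bytes, 1460);
        return NULL;
    }

    int main(void)
    {
        pthread_t p, a;
        pthread_create(&p, NULL, protocol_thread, NULL);
        pthread_create(&a, NULL, app_thread, NULL);
        pthread_join(p, NULL);
        pthread_join(a, NULL);
        return 0;
    }

Padding the two fields onto separate cache lines would remove the false sharing in this toy case, but, as the text notes, the true sharing of frequently updated socket state between application and protocol threads cannot be padded away.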

