
Introduction

Most work on operating system support for high-speed networks to date has focused on improving message latency and on delivering the network's full bandwidth to application programs [1, 5, 7, 21]. More recently, researchers have started to look at resource management issues in network servers such as LAN servers, firewall gateways, and WWW servers [16, 17]. This paper proposes a new network subsystem architecture based on lazy receiver processing (LRP), which provides stable overload behavior, fair resource allocation, and increased throughput under heavy load from the network.

State-of-the-art operating systems use sophisticated means of controlling the resources consumed by application processes. Policies for dynamic scheduling, main memory allocation, and swapping are designed to ensure graceful behavior of a timeshared system under various load conditions. Resources consumed during the processing of network traffic, on the other hand, are generally not controlled and accounted for in the same manner. This poses a problem for network servers that face a large volume of network traffic and potentially spend considerable amounts of resources on processing that traffic.

In particular, UNIX-based operating systems and many non-UNIX operating systems use an interrupt-driven network subsystem architecture that gives strictly highest priority to the processing of incoming network packets. This leads to scheduling anomalies, decreased throughput, and potential resource starvation of applications. Furthermore, the system becomes unstable in the face of overload from the network. This problem is serious even with the relatively slow network technology of today and will grow worse as networks increase in speed.

We propose a network subsystem architecture that integrates network processing into the system's global resource management. Under this system, resources spent in processing network traffic are associated with and charged to the application process that causes the traffic. Incoming network traffic is scheduled at the priority of the process that receives the traffic, and excess traffic is discarded early. This allows the system to maintain fair allocation of resources while handling high volumes of network traffic, and achieves system stability under overload.
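To make the idea concrete, the following is a minimal user-space C sketch of the receive path this design implies. The names and structures (lrp_socket, lrp_deliver, lrp_receive, the fixed queue length) are illustrative assumptions, not the prototype's actual interface: packets are demultiplexed to a bounded per-socket queue on arrival, excess packets are dropped before any protocol processing is spent on them, and the deferred processing runs only when the owning process asks for data, so its cost is naturally charged to that process.

/* Illustrative sketch only -- not the LRP prototype's code. */
#include <stdio.h>
#include <string.h>

#define QUEUE_LEN 8                 /* per-socket limit: excess traffic is dropped early */

struct packet { char payload[64]; };

struct lrp_socket {
    struct packet queue[QUEUE_LEN]; /* raw, unprocessed packets */
    int head, tail, count;
    long drops;                     /* packets discarded before any processing */
};

/* Early demultiplex step: runs at packet arrival and does minimal work. */
void lrp_deliver(struct lrp_socket *s, const struct packet *p)
{
    if (s->count == QUEUE_LEN) {    /* receiver is overloaded: drop now,      */
        s->drops++;                 /* spending no protocol-processing effort */
        return;
    }
    s->queue[s->tail] = *p;
    s->tail = (s->tail + 1) % QUEUE_LEN;
    s->count++;
}

/* Lazy step: protocol processing happens here, in the context (and at the
 * priority) of the process that owns the socket, so it is charged to it. */
int lrp_receive(struct lrp_socket *s, char *buf, size_t len)
{
    if (s->count == 0)
        return 0;                   /* a real kernel would block the caller */
    struct packet *p = &s->queue[s->head];
    s->head = (s->head + 1) % QUEUE_LEN;
    s->count--;
    /* stands in for the IP/UDP work that was deferred until now */
    strncpy(buf, p->payload, len - 1);
    buf[len - 1] = '\0';
    return (int)strlen(buf);
}

int main(void)
{
    struct lrp_socket sock = {0};
    struct packet p;

    for (int i = 0; i < 20; i++) {  /* burst larger than the socket's quota */
        snprintf(p.payload, sizeof p.payload, "packet %d", i);
        lrp_deliver(&sock, &p);
    }

    char buf[64];
    while (lrp_receive(&sock, buf, sizeof buf) > 0)
        printf("received: %s\n", buf);
    printf("dropped early: %ld\n", sock.drops);
    return 0;
}

In a kernel implementation the early demultiplexing step would run at packet arrival time and the receive call would block when the queue is empty; the sketch only illustrates where the work is done and which process is charged for it.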

Experiments show that a prototype system based on LRP maintains its throughput and remains responsive even when faced with excessive network traffic on a 155 Mbit/s ATM network. In comparison, a conventional UNIX system collapses under network traffic conditions that can easily arise on a 10 Mbit/s Ethernet. Further results show increased fairness in resource allocation, traffic separation, and increased throughput under high load.

The rest of this paper is organized as follows. Section 2 gives a brief overview of the network subsystem found in BSD UNIX-derived systems [13] and identifies problems that arise when a system of this type is used as a network server. The design of the LRP network architecture is presented in Section 3. Section 4 gives a quantitative performance evaluation of our prototype implementation. Finally, Section 5 covers related work and Section 6 offers some conclusions.


