Sources of High Network Load

Network protocols and distributed application programs use flow control mechanisms to prevent a sender process from generating more traffic than the receiver process can handle. Unfortunately, flow control does not necessarily prevent overload of network server machines. Some reasons for this are:

TCP connection establishment requests (TCP SYN packets) from a large number of clients can flood a WWW server. This is true despite TCP's flow control mechanism (which regulates traffic on established connections) and TCP's exponential backoff strategy for connection establishment requests (which can only limit the rate of retries). The maximum rate of SYN packets is bounded only by the capacity of the network. Similar arguments apply to any server that serves a virtually unlimited client community, such as the Internet.
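As a concrete, purely illustrative sketch (not code from the paper; the server address and port are placeholders), the client below issues TCP connection requests in a tight loop. Flow control applies only to data on connections that already exist, so nothing in the protocol limits how quickly this loop, or many such clients together, can generate SYNs.

/*
 * Illustrative sketch only: measure how fast connection requests can
 * be issued.  SERVER_ADDR and SERVER_PORT are placeholders.
 */
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

#define SERVER_ADDR "10.0.0.1"   /* placeholder server address */
#define SERVER_PORT 8080         /* placeholder server port    */

int main(void)
{
    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(SERVER_PORT);
    inet_pton(AF_INET, SERVER_ADDR, &sin.sin_addr);

    time_t start = time(NULL);
    long attempts = 0;

    /* Issue connection requests for roughly ten seconds. */
    while (time(NULL) - start < 10) {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0)
            continue;
        /* A non-blocking connect() typically returns immediately
         * (EINPROGRESS) once the kernel has issued the SYN; the
         * server's kernel must process that SYN whether or not the
         * server application ever accepts the connection. */
        fcntl(s, F_SETFL, fcntl(s, F_GETFL, 0) | O_NONBLOCK);
        connect(s, (struct sockaddr *)&sin, sizeof(sin));
        close(s);
        attempts++;
    }
    printf("%ld connection attempts in 10 seconds\n", attempts);
    return 0;
}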

Distributed applications built on top of a simple datagram service such as UDP must implement their own flow and congestion control mechanisms. When these mechanisms are deficient, excessive network traffic can result. Incorrect implementations of flow-controlled protocols such as TCP--not uncommon in the PC market--can have the same effect. The vulnerability of network servers to network traffic overload can be, and has been, exploited for security attacks. Thus, current network servers have a protection and security problem, since untrusted application programs running on clients can cause the failure of the shared server.
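For illustration only (not the paper's code; the receiver address, port, and one-byte acknowledgment format are assumptions), the sketch below shows the simplest application-level flow control a UDP sender can provide, stop-and-wait: each datagram must be acknowledged before the next is sent. If this discipline is omitted or implemented incorrectly, the sender transmits as fast as its host allows, regardless of the receiver's state.

/*
 * Illustrative sketch of stop-and-wait flow control over UDP.
 * RECV_ADDR, RECV_PORT, and the one-byte ACK format are assumptions.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

#define RECV_ADDR "10.0.0.2"   /* placeholder receiver address */
#define RECV_PORT 9000         /* placeholder receiver port    */

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in to;
    memset(&to, 0, sizeof(to));
    to.sin_family = AF_INET;
    to.sin_port = htons(RECV_PORT);
    inet_pton(AF_INET, RECV_ADDR, &to.sin_addr);

    /* Wait at most one second for each acknowledgment. */
    struct timeval tv = { 1, 0 };
    setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    char data[1024] = { 0 };
    char ack;

    for (int seq = 0; seq < 100; seq++) {
        data[0] = (char)seq;
        sendto(s, data, sizeof(data), 0,
               (struct sockaddr *)&to, sizeof(to));

        /* Stop-and-wait: do not send the next datagram until the
         * receiver has acknowledged this one.  Without this step, the
         * sending rate is not bounded by anything the receiver does. */
        if (recv(s, &ack, 1, 0) < 1 || ack != (char)seq)
            seq--;   /* timeout or wrong ACK: retransmit */
    }
    close(s);
    return 0;
}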

There are many examples of real-world systems that are prone to the problems discussed above. A packet-filtering application-level gateway, such as a firewall, establishes a new TCP connection for every flow that passes through it. An excessive flow establishment rate can overwhelm the gateway. Moreover, a misbehaving flow can get an unfair share of the gateway's resources and interfere with other flows that pass through it. Similar problems can occur in systems that run several server processes, such as Web servers that use a process per connection (sketched below), or single-process servers that use a kernel thread per connection. Systems that run multimedia applications can ill afford scheduling anomalies, such as those caused by bursty network traffic. Beyond these examples, any system that uses eager network processing can be livelocked by an excess of network traffic--this need not be part of a denial-of-service attack; it can result simply from a program error.
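The process-per-connection structure mentioned above can be sketched as follows (a simplification, not code from any particular server; the port and handler are placeholders). Each accepted connection forks a new process, so the connection arrival rate, which flow control does not bound, translates directly into process-creation and scheduling load on the server.

/*
 * Illustrative sketch of a process-per-connection server.
 * LISTEN_PORT and handle_connection() are placeholders.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <signal.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define LISTEN_PORT 8080   /* placeholder port */

static void handle_connection(int c)
{
    /* Placeholder for reading the request and sending a reply. */
    const char msg[] = "hello\n";
    write(c, msg, sizeof(msg) - 1);
}

int main(void)
{
    signal(SIGCHLD, SIG_IGN);            /* reap children automatically */

    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    sin.sin_port = htons(LISTEN_PORT);
    bind(s, (struct sockaddr *)&sin, sizeof(sin));
    listen(s, 128);

    for (;;) {
        int c = accept(s, NULL, NULL);
        if (c < 0)
            continue;
        if (fork() == 0) {               /* child: serve one connection */
            close(s);
            handle_connection(c);
            close(c);
            _exit(0);
        }
        close(c);                        /* parent: keep accepting */
    }
}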

These problems make it imperative that a network server be able to control its resources in a manner that ensures efficiency and stability under conditions of high network load. The conventional, interrupt-driven network subsystem architecture does not satisfy this criterion.


