
Problems

We now describe several problems that can arise when a system with a conventional network architecture faces high volumes of network traffic. These problems stem from four aspects of the network subsystem:

Eager receiver processing
Processing of received packets is strictly interrupt-driven: the highest priority is given to the capture and storage of packets in main memory, the second highest to the protocol processing of packets, and the lowest to the applications that consume the messages.

Lack of effective load shedding
Packet dropping as a means to resolve receiver overload occurs only after significant host CPU resources have already been invested in the dropped packet.

Lack of traffic separation
Incoming traffic destined for one application (socket) can lead to delay and loss of packets destined for another application (socket).

Inappropriate resource accounting
CPU time spent in interrupt context during the reception of packets is charged to the application that happens to be executing when a packet arrives. Since CPU usage, as maintained by the system, influences a process's future scheduling priority, this is unfair (see the sketch following this list).
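
To make the accounting problem concrete, the following is a minimal sketch of tick-based CPU accounting. It is illustrative only; the names (statclock_tick, struct proc, curproc) and fields are assumed stand-ins, not actual kernel interfaces.

    /*
     * Minimal sketch of tick-based CPU accounting -- illustrative only, not
     * actual UNIX kernel source.  Every statistics-clock tick is billed to
     * whichever process happens to be running, even when the CPU was really
     * executing a packet-receive interrupt on behalf of some other process
     * (or of no process at all).
     */
    struct proc {
        int cpu_ticks;      /* recorded CPU usage ...                      */
        int priority;       /* ... which degrades this scheduling priority */
    };

    extern struct proc *curproc;    /* process running when the tick hit */
    extern int in_net_interrupt;    /* nonzero while packet reception is in progress */

    /* Called on every statistics-clock tick. */
    void
    statclock_tick(void)
    {
        /*
         * The charge goes to curproc unconditionally.  Whether the tick was
         * consumed by curproc's own code or by an unrelated network
         * interrupt makes no difference to curproc's recorded usage, and
         * hence to its future scheduling priority.
         */
        curproc->cpu_ticks++;

        (void)in_net_interrupt;     /* noted for global statistics; the charge
                                       is not redirected */
    }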

Eager receiver processing has significant disadvantages when used in a network server. It gives highest priority to the processing of incoming network packets, regardless of the state or the scheduling priority of the receiving application. A packet arrival always interrupts the currently executing application, even if any of the following conditions holds: (1) the currently executing application is not the receiver of the packet; (2) the receiving application is not blocked waiting for the packet; or (3) the receiving application has a priority lower than or equal to that of the currently executing process. As a result, the overhead of dispatching and handling interrupts, together with the increased context switching, can limit the throughput of a server under load.
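
As a rough illustration, the sketch below shows the three stages of such an interrupt-driven receive path in the style of a BSD-like kernel. The function and type names are assumptions for illustration, not actual kernel interfaces; the point is only the fixed priority order among the stages.

    /*
     * Sketch of eager, interrupt-driven receiver processing (BSD-like style).
     * All names are illustrative stand-ins, not actual kernel interfaces.
     * The essential property is the fixed priority order: stage 1 preempts
     * stage 2, and both preempt stage 3 -- no matter which application the
     * packet is destined for, and whether or not that application is waiting.
     */
    #include <stddef.h>

    struct packet;                            /* a received network packet */

    void ip_queue_put(struct packet *);       /* shared IP input queue */
    struct packet *ip_queue_get(void);        /* NULL when the queue is empty */
    void protocol_process(struct packet *);   /* IP/UDP header processing */
    void socket_append(struct packet *);      /* append to destination socket queue */
    void post_soft_intr(void);                /* request a software interrupt */

    /* Stage 1 -- hardware interrupt (highest priority): capture and store. */
    void
    device_rx_intr(struct packet *pkt)
    {
        ip_queue_put(pkt);          /* runs immediately, whatever was preempted */
        post_soft_intr();
    }

    /* Stage 2 -- software interrupt (second priority): protocol processing. */
    void
    net_soft_intr(void)
    {
        struct packet *pkt;

        while ((pkt = ip_queue_get()) != NULL) {
            protocol_process(pkt);
            socket_append(pkt);
        }
    }

    /*
     * Stage 3 -- lowest priority: the receiving application consumes data
     * from its socket queue via recv()/recvfrom(), but only after stages 1
     * and 2 have drained every pending packet, for every socket.
     */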

Under high load from the network, the system can enter a state known as receiver livelock [20]. In this state, the system spends all of its resources processing incoming network packets, only to discard them later because no CPU time is left to service the receiving application programs. For instance, consider the behavior of the system under increasing load from incoming UDP packets. Since hardware interface interrupts and software interrupts have higher priority than user processes, the socket queues eventually fill because the receiving application no longer gets enough CPU time to consume the packets. At that point, packets are discarded when they reach the socket queue. As the load increases further, the software interrupts eventually can no longer keep up with the protocol processing, causing the IP queue to fill. The underlying problem is that early stages of receiver processing have strictly higher priority than later stages. Under overload, packets are therefore dropped only after resources have been invested in them. As a result, the throughput of the system drops as the offered load increases, until the system finally spends all of its time processing packets only to discard them.
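
The resulting throughput collapse can be seen in a back-of-the-envelope model. The sketch below is not a measurement: the per-packet costs are assumed numbers, chosen only to show the shape of the curve, in which delivered throughput first tracks the offered load and then declines toward zero as eager packet processing monopolizes the CPU.

    /*
     * Back-of-the-envelope model of receiver livelock.  The per-packet costs
     * are assumptions for illustration, not measured values.  Interrupt-level
     * work always gets the CPU first; the application consumes packets only
     * with whatever CPU time is left over.
     */
    #include <stdio.h>

    int main(void)
    {
        const double t_intr = 10e-6;   /* assumed per-packet interrupt + protocol cost, seconds */
        const double t_app  = 40e-6;   /* assumed per-packet application cost, seconds */
        double offered;                /* offered load, packets per second */

        for (offered = 10000; offered <= 120000; offered += 10000) {
            double cpu_intr  = offered * t_intr;        /* CPU fraction taken eagerly */
            double cpu_left  = 1.0 - cpu_intr;
            double delivered = 0.0;

            if (cpu_left > 0.0) {
                double app_capacity = cpu_left / t_app; /* packets/s the application can consume */
                delivered = offered < app_capacity ? offered : app_capacity;
            }
            /* Past the peak, additional offered load only shrinks delivered throughput. */
            printf("offered %7.0f pkt/s -> delivered %7.0f pkt/s\n", offered, delivered);
        }
        return 0;
    }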

Bursts of packets arriving from the network can cause scheduling anomalies. In particular, the delivery of an incoming message to the receiving application can be delayed by a burst of subsequently arriving packets. This is because the network processing of the entire burst of packets must complete before any application process can regain control of the CPU. Also, since all incoming IP traffic is placed in the shared IP queue, aggregate traffic bursts can exceed the IP queue limit and/or exhaust the mbuf pool. Thus, traffic bursts destined for one server process can lead to the delay and/or loss of packets destined for other sockets. This type of traffic interference is generally unfair and undesirable.
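
The interference can be illustrated with a minimal sketch that assumes a single shared input queue with a fixed limit; the names and the limit are made up for illustration.

    /*
     * Sketch of traffic interference through the shared IP input queue --
     * illustrative only; the queue limit and names are assumptions.  All
     * incoming packets, whatever their destination socket, compete for the
     * same fixed number of queue slots.
     */
    #include <stdio.h>

    #define IPQ_LIMIT 50                 /* assumed shared IP input queue limit */

    static int ipq_len;                  /* packets currently queued, all destinations mixed */

    /* Device interrupt enqueues onto the single shared queue, dropping on overflow. */
    static void
    ip_enqueue(const char *dst_socket)
    {
        if (ipq_len >= IPQ_LIMIT) {
            printf("dropped packet for %s (shared IP queue full)\n", dst_socket);
            return;
        }
        ipq_len++;
    }

    int main(void)
    {
        int i;

        /* A burst of 60 packets for socket A arrives before the software
           interrupt has a chance to drain the queue ... */
        for (i = 0; i < 60; i++)
            ip_enqueue("socket A");

        /* ... so a subsequent packet destined for the unrelated socket B
           is dropped as well, even though B's own traffic is light. */
        ip_enqueue("socket B");

        return 0;
    }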

