
UDP protocol processing

For unreliable, datagram-oriented protocols like UDP, network processing proceeds as follows. Transmit-side processing remains largely unchanged: packets are processed by the UDP and IP code in the context of the user process performing the send system call, and the resulting IP packet(s) are then placed on the interface queue.
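
As a point of reference, the transmit path amounts to a straight call chain executed in the sender's context. The following C fragment is a sketch only; udp_output(), ip_output(), if_enqueue(), and struct packet are stand-ins for the corresponding kernel routines and buffer type, not code from the LRP implementation.

    /*
     * Sketch of the (unchanged) transmit path.  The names below stand in
     * for the kernel's UDP output, IP output, and interface-queue routines;
     * struct packet is a placeholder for the kernel's packet buffer type.
     */
    struct socket;
    struct packet;

    struct packet *udp_output(struct socket *so, struct packet *data);
    struct packet *ip_output(struct packet *seg);
    void           if_enqueue(struct packet *pkt);

    void
    udp_send(struct socket *so, struct packet *data)
    {
        /* Everything here runs in the context of the process calling send(). */
        struct packet *seg = udp_output(so, data);  /* prepend UDP header */
        struct packet *pkt = ip_output(seg);        /* prepend IP header(s),
                                                       possibly fragmenting */
        if_enqueue(pkt);                            /* place on the network
                                                       interface's send queue */
    }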

On the receiving side, the network interface determines the destination socket of each incoming packet and places it on the corresponding channel queue. If that queue is full, the packet is discarded. If the queue was previously empty, and a state flag indicates that interrupts are requested for this socket, the NI generates a host interrupt. When a user process calls a receive system call on a UDP socket, the system checks the associated channel's receive queue. If the queue is non-empty, the first packet is removed; otherwise, the process blocks waiting for an interrupt from the NI. After removing a packet from the receive queue, IP's input function is called, which in turn calls UDP's input function. Eventually the processed packet is copied into the application's buffer. All of these steps are performed in the context of the user process performing the system call.
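
The core of this receive path can be sketched in C as a bounded per-socket channel queue shared by the NI and the receiving process. Everything below is an illustrative sketch, not code from the LRP implementation: struct ni_channel, ni_deliver(), lrp_udp_recv(), the queue depth, and the helper routines are all assumed names.

    #include <stddef.h>

    #define CHAN_QLEN 32                /* assumed per-channel queue depth */

    struct packet;                      /* placeholder for the packet buffer type */

    struct ni_channel {                 /* one receive queue per UDP socket */
        struct packet *q[CHAN_QLEN];
        int            head, tail, count;
        int            want_intr;       /* receiver blocked, interrupt requested */
    };

    /* Stubs for kernel services used below. */
    void drop_packet(struct packet *pkt);
    void raise_host_interrupt(struct ni_channel *ch);
    void sleep_until_interrupt(struct ni_channel *ch);
    void ip_input(struct packet *pkt);  /* calls udp_input() internally */
    int  copy_to_user(void *dst, struct packet *pkt, size_t len);

    /*
     * NI side: called after the interface has demultiplexed an incoming
     * packet to its destination socket's channel.
     */
    void
    ni_deliver(struct ni_channel *ch, struct packet *pkt)
    {
        if (ch->count == CHAN_QLEN) {   /* queue full: drop now, before any  */
            drop_packet(pkt);           /* host protocol processing is done  */
            return;
        }
        int was_empty = (ch->count == 0);
        ch->q[ch->tail] = pkt;
        ch->tail = (ch->tail + 1) % CHAN_QLEN;
        ch->count++;
        if (was_empty && ch->want_intr) /* wake a blocked receiver */
            raise_host_interrupt(ch);
    }

    /*
     * Host side: invoked from the receive system call, in the context of
     * the receiving user process.
     */
    int
    lrp_udp_recv(struct ni_channel *ch, void *user_buf, size_t len)
    {
        while (ch->count == 0)
            sleep_until_interrupt(ch);  /* block until the NI interrupts */

        struct packet *pkt = ch->q[ch->head];
        ch->head = (ch->head + 1) % CHAN_QLEN;
        ch->count--;

        /* Protocol processing was deferred until now and runs here, in the
         * receiving process's context: IP input, UDP input, then the copy. */
        ip_input(pkt);
        return copy_to_user(user_buf, pkt, len);
    }

In a real kernel the NI and the host must of course synchronize access to the queue; the sketch omits this, along with checksum verification and error handling.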

There are several things to note about the receive-side processing. First, protocol processing for a packet does not occur until the application is waiting for the packet, the packet has arrived, and the application is scheduled to run. As a result, one might expect reduced context switching and increased memory access locality. Second, when the rate of incoming packets exceeds the rate at which the receiving application can consume them, the channel receive queue fills, causing the network interface to drop packets. This dropping occurs before significant host resources have been invested in the packet. As a result, the system has good overload behavior: as the offered rate of incoming traffic approaches the capacity of the server, the throughput reaches its maximum and stays there even if the offered rate increases further.

It is important to realize that LRP does not increase the latency of UDP packets. The only condition under which the delivery delay of a UDP packet could increase under LRP is when a host CPU is idle between the time of arrival of the packet and the invocation of the receive system call that will deliver the packet to the application. This case can occur on multiprocessor machines, and on a uniprocessor when the only runnable application blocks on an I/O operation (e.g., disk) before invoking the receive system call. To eliminate this possibility, an otherwise idle CPU should always perform protocol processing for any received packets. This is easily accomplished by means of a kernel thread with minimal priority that checks NI channels and performs protocol processing for any queued UDP packets.
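
A minimal sketch of such a thread, reusing the struct ni_channel sketch above, might look as follows; lrp_idle_thread(), chan_nonempty(), chan_dequeue(), and yield_cpu() are assumed names, not part of the LRP implementation.

    /*
     * Sketch of the minimum-priority kernel thread described above.  Because
     * it runs at the lowest priority, it executes only when the CPU would
     * otherwise be idle; any runnable process preempts it.  All names and
     * helpers are illustrative.
     */
    struct ni_channel;                      /* defined in the sketch above */
    struct packet;

    extern struct ni_channel *channels[];   /* all NI channels in the system */
    extern int nchannels;

    int  chan_nonempty(struct ni_channel *ch);
    struct packet *chan_dequeue(struct ni_channel *ch);
    void ip_input(struct packet *pkt);      /* IP + UDP input processing */
    void yield_cpu(void);

    void
    lrp_idle_thread(void)
    {
        for (;;) {
            int did_work = 0;
            for (int i = 0; i < nchannels; i++) {
                if (chan_nonempty(channels[i])) {
                    /* Process the queued UDP packet now, so its delivery does
                     * not have to wait for the eventual receive system call. */
                    ip_input(chan_dequeue(channels[i]));
                    did_work = 1;
                }
            }
            if (!did_work)
                yield_cpu();                /* nothing queued; give up the CPU */
        }
    }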


