Protocol processing is slightly more complex for a reliable, flow-controlled protocol such as TCP. As in the original architecture, data written by an application is queued in the socket queue. Some data may be transmitted immediately in the context of the user process performing the send system call. The remaining data is transmitted in response to arriving acknowledgments, and possibly in response to timeouts.
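This send-side behavior can be sketched as follows. The sketch is illustrative, not the paper's kernel code: `sock_send`, `tcp_output`, and `struct tsock` are hypothetical names, and the window bookkeeping is reduced to byte counters.

```c
#include <assert.h>

/* Illustrative sketch of the TCP send path described above: data written
 * by the application is appended to the socket send queue; whatever the
 * current window permits is transmitted immediately, in the context of the
 * sending process, and the remainder drains as acknowledgments reopen the
 * window. All names are hypothetical, not the paper's actual interfaces. */

struct tsock {
    int queued;   /* bytes in the socket send queue, not yet transmitted */
    int window;   /* bytes the window currently allows                   */
    int sent;     /* bytes handed to the network interface               */
};

/* Transmit as much queued data as the window allows. */
static void tcp_output(struct tsock *s) {
    int n = s->queued < s->window ? s->queued : s->window;
    s->queued -= n;
    s->window -= n;
    s->sent   += n;
}

/* send(): queue the data, then transmit what the window permits,
 * in the context of the user process performing the system call. */
void sock_send(struct tsock *s, int len) {
    s->queued += len;
    tcp_output(s);
}

/* An arriving acknowledgment reopens the window and resumes
 * transmission of the remaining queued data. */
void ack_arrived(struct tsock *s, int acked) {
    s->window += acked;
    tcp_output(s);
}
```

A 2500-byte send against a 1000-byte window transmits 1000 bytes immediately and leaves 1500 queued; each subsequent acknowledgment drains another window's worth.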
The main difference between UDP and TCP processing in the LRP architecture is that receiver processing cannot be performed only in the context of a receive system call, due to the semantics of TCP. Because TCP is flow controlled, transmission of data is paced by the receiver via acknowledgments. Achieving high network utilization and throughput requires timely processing of incoming acknowledgments. If receiver processing were performed only in the context of receive system calls, then at most one TCP congestion window of data could be transmitted between successive receive system calls, resulting in poor performance for many applications.
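The resulting throughput cap can be made concrete with a back-of-the-envelope bound: if acknowledgments are processed only during receive system calls, at most one congestion window of data moves per inter-call interval, regardless of link speed. The function and the numbers below are illustrative, not measurements from the paper.

```c
#include <assert.h>

/* Illustrative bound: with ACK processing confined to receive system
 * calls, throughput cannot exceed one congestion window per interval
 * between successive calls, i.e. cwnd / recv_interval. */
long long max_bytes_per_sec(long long cwnd_bytes, long long recv_interval_usec) {
    return cwnd_bytes * 1000000LL / recv_interval_usec;
}
```

For example, a 64 KB congestion window with receive calls 100 ms apart caps throughput at 640 KB/s, far below what even a modest link can carry.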
The solution is to perform receiver processing for TCP sockets asynchronously when required. Packets arriving on TCP connections can thus be processed even when the application process is not blocked on a receive system call. Unlike in conventional architectures, this asynchronous protocol processing does not take strict priority over application processing. Instead, the processing is scheduled at the priority of the application process that uses the associated socket, and CPU usage is charged back to that application. Under normal conditions, the application has a sufficiently high priority to ensure timely processing of TCP traffic. If an excessive amount of traffic arrives at the socket, the application's priority will decay as a result of the high CPU usage. Eventually, the protocol processing can no longer keep up with the offered load, causing the channel receiver queue to fill and packets to be dropped by the NI. In addition, protocol processing is disabled for listening sockets that have exceeded their listen backlog limit, thus causing the discard of further SYN packets at the NI channel queue. As shown in Section 4, TCP sockets under LRP enjoy overload behavior and traffic separation similar to those of UDP sockets.
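The overload behavior can be sketched with two operations: the NI demultiplexes packets into a fixed-capacity per-socket channel queue, discarding on overflow, while asynchronous protocol processing drains the queue and bills its CPU time to the owning application. The names, the queue capacity, and the counter-based accounting below are illustrative assumptions, not the paper's implementation.

```c
#include <assert.h>

#define QCAP 8  /* illustrative channel receiver queue capacity */

struct chan {
    int qlen;         /* packets waiting in the channel receiver queue */
    int dropped;      /* packets discarded early by the NI             */
    int cpu_charged;  /* protocol-processing CPU billed to the app     */
};

/* NI demultiplexes an arriving packet to the socket's channel queue;
 * once the queue is full, further packets are dropped at the NI,
 * before any host protocol processing is spent on them. */
void ni_deliver(struct chan *c) {
    if (c->qlen >= QCAP)
        c->dropped++;
    else
        c->qlen++;
}

/* Asynchronous protocol processing: runs at the application's
 * priority, and its CPU usage is charged back to the application
 * (under a UNIX scheduler, high usage decays that priority). */
void app_process_one(struct chan *c) {
    if (c->qlen > 0) {
        c->qlen--;
        c->cpu_charged++;
    }
}
```

When arrivals outpace the application's ability to process them, the queue saturates and the excess is dropped at the NI rather than consuming host CPU, which is the traffic-separation property the paragraph describes.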
There are several ways of implementing asynchronous protocol processing (APP). In systems that support (kernel) threads (i.e., virtually all modern operating systems), an extra thread can be associated with each application process that uses stream (TCP) sockets. This thread is scheduled at its process's priority, and its CPU usage is charged to its process. Since protocol processing always runs to completion, no state needs to be retained between activations. Therefore, it is not necessary to assign a private runtime stack to the APP thread; a single per-CPU stack can be used instead. The resulting per-process space overhead of APP is one thread control block. This overhead can be further reduced through the use of continuations [3]. The exact choice of a mechanism for APP greatly depends on the facilities available in a particular UNIX kernel. In our current prototype implementation, a kernel process is dedicated to TCP processing.
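The run-to-completion property that makes a shared stack sufficient can be illustrated by the shape of the APP thread's main loop: each pending item is processed fully before the next is picked up, so no per-item stack state survives between activations. This is a minimal sketch under that assumption; `app_enqueue`, `app_thread_run`, and the fixed-size work queue are hypothetical names, not the prototype's code.

```c
#include <assert.h>

#define MAXWORK 16  /* illustrative bound on pending work items */

/* Pending TCP segments awaiting asynchronous protocol processing. */
static int work[MAXWORK];
static int head, tail;
static int processed;  /* stands in for CPU time charged to the process */

/* Queue a segment for the APP thread. */
void app_enqueue(int seg) {
    work[tail % MAXWORK] = seg;
    tail++;
}

/* Body of the APP thread: scheduled at its process's priority, it
 * drains each work item to completion before taking the next one.
 * Because nothing is suspended mid-item, the loop needs no private
 * stack of its own; a single per-CPU stack would suffice. */
void app_thread_run(void) {
    while (head != tail) {
        int seg = work[head % MAXWORK];
        head++;
        (void)seg;     /* full TCP receive processing would go here */
        processed++;   /* CPU usage billed to the owning process    */
    }
}
```

The same loop structure also explains why continuations [3] can shrink the overhead further: since the only per-activation state is "which item comes next", a thread control block is already more machinery than the processing strictly requires.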