The TCP Model

Once the TCP implementation can run in user space, the next step is to construct the actual CMC model: allocating one or more CMC processes to run the TCP implementation and designing an environment that appropriately triggers it.
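As an illustration only (the paper does not show this code), the environment can be thought of as a driver that nondeterministically issues user-level requests against the TCP implementation. The sketch below assumes a hypothetical cmc_choose() nondeterministic-choice primitive, stubbed here so the fragment compiles on its own; the wrapped socket operations are placeholders for whatever calls the real harness makes.

    /*
     * Illustrative environment driver (not from the paper): on each step it
     * nondeterministically issues one user-level request to the TCP
     * implementation running in user space.  cmc_choose() stands in for
     * CMC's choice primitive and is stubbed so this fragment compiles alone.
     */
    #include <stdlib.h>

    static int cmc_choose(int n) { return rand() % n; }   /* stub for CMC's choice */

    /* Placeholders for calls into the user-space build of the kernel TCP code. */
    static void do_send(void)  { /* e.g. drive the tcp_sendmsg() path */ }
    static void do_recv(void)  { /* e.g. drive the tcp_recvmsg() path */ }
    static void do_close(void) { /* e.g. drive the tcp_close() path   */ }

    /* One environment step: CMC explores every alternative from this state. */
    void environment_step(void)
    {
        switch (cmc_choose(3)) {
        case 0: do_send();  break;
        case 1: do_recv();  break;
        case 2: do_close(); break;
        }
    }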

In the kernel, the TCP code executes in three contexts: in user context when a user process makes a system call, in the context of a network interrupt handler when a TCP packet is received, and in the context of a timer interrupt handler when a TCP timer fires. To mimic this behavior, a CMC process running the Linux TCP implementation contains three threads, one for each of these contexts: a thread that executes user system calls, a thread that processes incoming TCP packets, and a thread that fires TCP timers.
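A minimal sketch of how such a process might be assembled is shown below. The cmc_thread_create() registration hook, the thread names, and their bodies are assumptions for illustration, not CMC's actual API; the thread bodies would call into the user-space TCP build.

    /*
     * Sketch of a CMC process for one TCP endpoint: one thread per kernel
     * execution context.  cmc_thread_create() is a hypothetical registration
     * hook; the thread bodies would call into the user-space TCP build.
     */
    typedef void (*thread_fn)(void);

    static void syscall_thread(void) { /* run user-context system calls                */ }
    static void netrx_thread(void)   { /* hand a received packet to the TCP input path */ }
    static void timer_thread(void)   { /* fire an expired TCP timer                    */ }

    /* Assumed CMC hook: register a schedulable thread with the checker. */
    static void cmc_thread_create(thread_fn fn) { (void)fn; }

    void tcp_process_init(void)
    {
        cmc_thread_create(syscall_thread); /* mimics user context                  */
        cmc_thread_create(netrx_thread);   /* mimics the network interrupt handler */
        cmc_thread_create(timer_thread);   /* mimics the timer interrupt handler   */
    }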


Table 1: The state size for the TCP model described in Section 3.3. The second column shows the amount of state that changes in a transition, on average. For the global variables and the stack, CMC can only detect changes at page-size (4 KB) granularity. For the heap, CMC detects changes at the granularity of individual objects.

                                          Size of a System    Average Change in a
                                          State (in KB)       Transition (in KB)
    Global Variables                            78.28               11.37
    Heap (average)                              25.06                2.13
    Stack                                       24.00                4.00
    Total per Process                          127.34               17.50
    Network State                                1.00                1.00
    System State (sum of two process
      states and the network state)            255.68               18.50


These threads, once triggered, either execute to completion or block by yielding control to the kernel scheduler. In the real kernel, the scheduler would then run its scheduling algorithm to determine which thread executes next. In our model, the scheduler is modified to immediately transfer control to CMC. This enables CMC to nondeterministically choose the thread that executes next and to explore multiple thread schedules from a given state.
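The modified scheduler might look roughly like the following sketch. The thread table, the cmc_choose() stub, and the function names are assumptions for illustration; the point is only that the choice among runnable threads is handed to the model checker rather than to the kernel's scheduling policy.

    /*
     * Sketch of the modified scheduler: instead of running the kernel's
     * scheduling algorithm, it lets CMC pick the next runnable thread, so
     * every schedule can be explored from the current state.  The thread
     * table and cmc_choose() are illustrative stand-ins for CMC internals.
     */
    #include <stdlib.h>

    #define MAX_THREADS 3

    struct model_thread { int runnable; void (*run)(void); };
    static struct model_thread threads[MAX_THREADS];

    static int cmc_choose(int n) { return rand() % n; }   /* stub for CMC's choice */

    void model_schedule(void)
    {
        int runnable[MAX_THREADS], count = 0;

        for (int i = 0; i < MAX_THREADS; i++)      /* collect runnable threads */
            if (threads[i].runnable)
                runnable[count++] = i;

        if (count == 0)
            return;                                /* nothing to run from this state */

        /* CMC branches here: each runnable thread is a successor state. */
        threads[runnable[cmc_choose(count)]].run();
    }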

Similarly, the kernel timer routine is modified to allow CMC to choose the timer that fires next when multiple timers are enabled. Optionally, CMC can fire timers out of order irrespective of their expiration times. While this can lead to some behaviors not possible in a real implementation, it has the benefit of making the implementation behavior independent of the actual values of the timers.
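A corresponding sketch of the modified timer routine is below; the data structures and names are assumed. Because an arbitrary enabled timer may fire, the explored behaviors do not depend on the timers' expiration values.

    /*
     * Sketch of the modified timer routine: any enabled timer may fire next,
     * regardless of its expiration time, so the explored behaviors do not
     * depend on the timers' actual values.  Names and layout are assumed.
     */
    #include <stdlib.h>

    #define MAX_TIMERS 4

    struct model_timer { int enabled; void (*handler)(void); };
    static struct model_timer timers[MAX_TIMERS];

    static int cmc_choose(int n) { return rand() % n; }   /* stub for CMC's choice */

    void fire_next_timer(void)
    {
        int enabled[MAX_TIMERS], count = 0;

        for (int i = 0; i < MAX_TIMERS; i++)       /* collect enabled timers */
            if (timers[i].enabled)
                enabled[count++] = i;

        if (count == 0)
            return;                                /* no timer is pending */

        /* Nondeterministic, possibly out-of-order firing. */
        int t = enabled[cmc_choose(count)];
        timers[t].enabled = 0;
        timers[t].handler();
    }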

The two kernels communicate through a network, modeled as a list of messages in shared memory. The network model can lose, duplicate, reorder, and corrupt messages. Each kernel communicates with the network through an appropriate network device driver.
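The following sketch illustrates one way such a network model could be structured, with the four message faults described above applied as a nondeterministic choice; the structure layout, capacity, and names are assumptions, not the paper's code.

    /*
     * Sketch of the shared-memory network: a bounded list of in-flight
     * messages on which CMC nondeterministically loses, duplicates,
     * reorders, or corrupts a message.  The layout, capacity, and names
     * are assumptions for illustration.
     */
    #include <stdlib.h>
    #include <string.h>

    #define NET_CAPACITY 8
    #define MSG_SIZE     128

    struct network {
        int  count;                             /* messages currently in flight */
        char msgs[NET_CAPACITY][MSG_SIZE];
    };

    static int cmc_choose(int n) { return rand() % n; }   /* stub for CMC's choice */

    /* Apply one nondeterministic fault to message i before delivery. */
    void network_perturb(struct network *net, int i)
    {
        switch (cmc_choose(4)) {
        case 0:                                 /* lose: drop the message */
            memmove(&net->msgs[i], &net->msgs[i + 1],
                    (size_t)(net->count - i - 1) * MSG_SIZE);
            net->count--;
            break;
        case 1:                                 /* duplicate the message */
            if (net->count < NET_CAPACITY)
                memcpy(net->msgs[net->count++], net->msgs[i], MSG_SIZE);
            break;
        case 2:                                 /* reorder: swap with the next message */
            if (i + 1 < net->count) {
                char tmp[MSG_SIZE];
                memcpy(tmp, net->msgs[i], MSG_SIZE);
                memcpy(net->msgs[i], net->msgs[i + 1], MSG_SIZE);
                memcpy(net->msgs[i + 1], tmp, MSG_SIZE);
            }
            break;
        case 3:                                 /* corrupt: flip one payload bit */
            net->msgs[i][0] ^= 0x1;
            break;
        }
    }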

