
Dealing with Multiple Lists

A descriptor that is supposed to be on multiple lists simultaneously complicates these procedures. So far, we have found it feasible to program so that a descriptor can be on any subset of the lists, inserted into or deleted from each list atomically as a separate operation. In particular, all the data structures that allow a descriptor to be absent from a list allow the descriptor to be inserted incrementally.
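
As a concrete illustration, consider a minimal sketch of per-list atomic insertion in C. It assumes a hypothetical DCAS primitive that atomically updates two words if both hold their expected values, a per-list version number, and one link field per list in the descriptor; the names are illustrative, not the Cache Kernel's actual interface.

    /* Hypothetical DCAS: atomically, if *a1 == o1 and *a2 == o2, store
       n1 into *a1 and n2 into *a2 and return nonzero; otherwise return
       0 and change nothing. */
    extern int DCAS(volatile unsigned long *a1, volatile unsigned long *a2,
                    unsigned long o1, unsigned long o2,
                    unsigned long n1, unsigned long n2);

    typedef struct Thread {
        struct Thread *ready_next;  /* link field for the ready queue */
        struct Thread *delay_next;  /* link field for the delay queue */
        /* ... other descriptor fields ... */
    } Thread;

    typedef struct List {           /* one instance per list */
        Thread        *head;
        unsigned long  version;     /* incremented on every update */
    } List;

    /* Atomically insert t at the head of one list, using the link field
       for that list; membership in the other lists is unaffected. */
    static void list_insert(List *l, Thread *t, Thread **link)
    {
        unsigned long v;
        do {
            v     = l->version;
            *link = l->head;        /* e.g. link == &t->ready_next */
        } while (!DCAS((volatile unsigned long *)&l->head, &l->version,
                       (unsigned long)*link, v,
                       (unsigned long)t, v + 1));
    }

A thread waking from a timed sleep, for instance, is first deleted from the delay queue and then inserted into the ready queue, each as a separate atomic operation; between the two steps it is simply on a subset of the lists, which the surrounding code tolerates.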

Overall, the major Cache Kernel [7] data structures are synchronized in a straightforward manner. Threads are on two linked lists: the ready queue and the delay queue. Descriptor free lists are operated as stacks, making allocation and deallocation simple and inexpensive. The virtual-to-physical page maps are stored in a tree of depth 3 with widths of 128, 128, and 64 respectively. Although the 128 immediate descendants of the root are never deleted, the sub-trees below them can be unloaded. Modifications to a map on level 3 are synchronized using DCAS with its parent's version number, to make sure that the entire subtree has not been modified in conflict with this update. Finally, the Cache Kernel maintains a ``dependency map'' that records dependencies between objects, including physical-to-virtual mappings. It is implemented as a fixed-size hash table with linked lists in each bucket. The signal mapping cache (an optimization for signal delivery to active threads) is also a direct-mapped hash table with linked lists in each bucket. The majority of uses of single CAS are for audits and counters.
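
The level-3 map update can be sketched as a single DCAS, reusing the DCAS declaration above. The structure layout, field names, and the convention that a successful update bumps the parent's version are assumptions for illustration, not the Cache Kernel's actual code.

    /* Level-2 node: parent of the level-3 maps; its version number is
       bumped whenever the subtree below it is modified or unloaded. */
    typedef struct L2Node {
        unsigned long  version;
        struct L3Map  *map[128];    /* widths taken from the text */
    } L2Node;

    typedef struct L3Map {
        unsigned long  pte[64];     /* virtual-to-physical entries */
    } L3Map;

    /* Install a new entry in a level-3 map.  The DCAS fails, and the
       caller revalidates and retries, if either the entry itself or the
       parent's version has changed, i.e. if the subtree was modified
       (or unloaded) in conflict with this update. */
    static int map_update(L2Node *parent, L3Map *m, int i,
                          unsigned long old_pte, unsigned long new_pte)
    {
        unsigned long v = parent->version;
        return DCAS(&m->pte[i], &parent->version,
                    old_pte, v,
                    new_pte, v + 1);
    }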

Synchronization of data structures more complex than those we have encountered can be handled by having each operation allocate, initialize, and enqueue a ``message'' for a server process that serially executes the requested operations. Read-only operations can still proceed as before, relying on a version number incremented by the server process. Moreover, the server process can run at high priority and include code to back out of an operation on a page fault, and therefore blocks the operation no more than if the operation were executed directly by the requesting process. The server process can also be carefully protected against failure, so that the data structure survives fail-stop behavior of a random application thread, which may be destroyed by the application.
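
A minimal sketch of this structure follows, assuming a non-blocking message queue (the enqueue could itself be a single DCAS-based insertion as above) and hypothetical msg_alloc/msg_free routines drawing on a type-stable free list; none of these names come from an actual implementation.

    typedef struct Msg {
        struct Msg *next;
        void      (*op)(void *);    /* the requested operation */
        void       *arg;
    } Msg;

    struct MsgQueue;                /* assumed non-blocking queue */
    extern void msg_enqueue(struct MsgQueue *q, Msg *m);
    extern Msg *msg_dequeue(struct MsgQueue *q);  /* waits when empty */
    extern Msg *msg_alloc(void);    /* from a TSM free list */
    extern void msg_free(Msg *m);

    /* Requesting thread: allocate, initialize, and enqueue a message;
       it never touches the complex data structure directly. */
    void request(struct MsgQueue *q, void (*op)(void *), void *arg)
    {
        Msg *m = msg_alloc();
        m->op  = op;
        m->arg = arg;
        msg_enqueue(q, m);
    }

    /* High-priority server process: executes requests serially, so
       updates to the structure need no further write synchronization;
       bumping the version number lets read-only operations detect
       conflicting updates, as described above. */
    void server_loop(struct MsgQueue *q, volatile unsigned long *version)
    {
        for (;;) {
            Msg *m = msg_dequeue(q);
            m->op(m->arg);
            (*version)++;
            msg_free(m);
        }
    }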

This approach was used by Pu and Massalin [17]. For example, a general-purpose memory page allocator can be synchronized in this manner, relying on a TSM memory pool to minimize accesses to the general allocator. However, in our code to date, the only case of queueing messages for a server module arises with device I/O. This structure avoids waiting for the device I/O to complete and is not motivated by synchronization issues.

Other work has investigated alternatives to and optimizations of this approach, in which helper functions are executed by a new thread if a previous thread accessing the data structure has left work to complete or roll back. For example, Israeli et al. [16] describe a non-blocking heap implemented along these lines using 2-word LL/SC, performing multiple updates as multiple distinct operations. However, to date, we have not needed to employ these so-called helper techniques and therefore cannot comment on their actual practicality or utility. Moreover, it seems questionable from a reliability standpoint to have threads from separate address spaces sharing access to complex data structures. These data structures are also more difficult to program and to maintain, and often provide marginal performance benefits in practice, particularly when synchronization overhead is taken into account. Their asymptotic performance benefits are often not realized at the scale of typical operating system data structures.





