We chose to use non-blocking synchronization in the design and implementation of the Cache Kernel [7] operating system kernel and supporting libraries for several reasons. First, non-blocking synchronization allows synchronized code to be executed in an (asynchronous) signal handler without danger of deadlock. For instance, an asynchronous RPC handler (as described in [25]) can directly store a string into a synchronized data structure such as a hash table even though it may be interrupting another thread updating the same table. With locking, the signal handler could deadlock with this other thread.
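For concreteness, the sketch below uses C11 atomics (not the Cache Kernel's actual primitives) and a lock-free stack in place of the hash table above. It illustrates why such a handler cannot deadlock: the update is a retry loop around a single compare-and-swap, so the handler never waits on a lock held by the thread it interrupted.

    #include <stdatomic.h>
    #include <stddef.h>

    /* Hypothetical node type; a lock-free stack shows the same
     * property as the hash table example, more compactly. */
    struct node {
        const char  *value;
        struct node *next;
    };

    static struct node *_Atomic top = NULL;   /* shared structure */

    /* Non-blocking push: safe to call from an asynchronous signal
     * handler, because it never blocks on a lock; it simply retries
     * the compare-and-swap until the head has not changed. */
    void push(struct node *n)
    {
        struct node *old = atomic_load(&top);
        do {
            n->next = old;  /* link in front of the current head */
        } while (!atomic_compare_exchange_weak(&top, &old, n));
    }

If the handler interrupts a thread in the middle of its own push, the interrupted thread's compare-and-swap simply fails and retries when it resumes; neither party ever waits on the other.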
Second, non-blocking synchronization minimizes interference between process scheduling and synchronization. For example, the highest priority process can access a synchronized data structure without being delayed or blocked by a lower priority process. In contrast, with blocking synchronization, a low priority process holding a lock can delay a higher priority process, effectively defeating the process scheduling. Blocking synchronization can also cause one process to be delayed by another lock-holding process that has encountered a page fault or a cache miss. The delay here can be hundreds of thousands of cycles in the case of a page fault. This type of interference is particularly unacceptable in an OS like the Cache Kernel, where real-time threads are supported and page faults (for non-real-time threads) are handled at the library level. Non-blocking synchronization also minimizes the formation of convoys, which arise when several processes queue up waiting while a single process holding a lock is delayed.
Finally, non-blocking synchronization provides greater insulation from failures, such as a fail-stopped process or processor aborting and leaving data structures inconsistent. Non-blocking techniques allow only a small window of inconsistency, namely during the atomic compare-and-swap sequence itself. In contrast, with lock-based synchronization the window of inconsistency spans the entire locked critical section. These larger critical sections and complex locking protocols also introduce the danger of deadlock, or of failing to release locks on certain code paths.
There is a strong synergy between non-blocking synchronization and the design and implementation of the Cache Kernel for performance, modularity and reliability. First, signals are the only kernel-supported form of notification, allowing a simple, efficient kernel implementation compared to more complex kernel message primitives, such as those used in V [6]. Class libraries implement higher-level communication like RPC in terms of signals and shared memory regions [25]. Non-blocking synchronization allows efficient library implementation without the overhead of disabling and re-enabling signals on each access, and without needing to carefully restrict the code executed by signal handlers.
Second, we simplified the kernel by implementing most operating system mechanisms at the class library level, particularly the object-oriented RPC system [25]; this structure also allows these facilities to be specialized using the C++ inheritance mechanism. Non-blocking synchronization allows the class library level to be tolerant of user threads being terminated (fail-stopped) in the middle of performing some system library function such as (re)scheduling or handling a page fault.
Finally, the isolation of synchronization from scheduling and thread deletion provided by non-blocking synchronization, together with the modularity of separate class libraries and the user-level implementation of services, leads to a more modular and reliable system design than seems feasible with conventional approaches.
This synergy between non-blocking synchronization and good system design and implementation carries forward into the more detailed aspects of the Cache Kernel implementation. In this paper, we describe aspects of this synergy in some detail and report on our experience to date.
The main techniques we use for modularity, performance and reliability are atomic DCAS (Double-Compare-and-Swap), type-stable memory management (TSM), and contention-minimizing data structures (CMDS).
DCAS (discussed in detail in Section 5) is defined in Figure 1. That is, DCAS atomically updates locations addr1 and addr2 to values new1 and new2 respectively if addr1 holds value old1 and addr2 holds old2 when the operation is invoked.
The next section describes type-stable memory management, which facilitates implementing non-blocking synchronization and provides several independent benefits to the software structure. Section 3 describes the contention-minimizing data structures, which benefit performance and reliability under both lock-based and non-blocking synchronization. Section 4 describes our approach to minimizing the window of inconsistency and the systems benefits of doing so. Section 5 describes the non-blocking synchronization implementation in further detail, with a comparison to a blocking implementation. Section 6 describes the non-blocking synchronization primitives that we assumed for our approach and a potential hardware implementation. Section 7 describes the performance of our implementation, using simulation to show its behavior under high contention. Section 8 describes how our effort relates to previous and current work in this area. We close with a summary of our conclusions and directions for future work.
Figure 1: Pseudo-code definition of DCAS (Double-Compare-and-Swap)
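In outline, the definition corresponds to the following C-style pseudo-code, a sketch of the semantics described in the text above; the begin/end atomic markers stand for the atomicity guarantee (a possible hardware implementation is discussed in Section 6), not for code that could be executed as written:

    /* Sketch of DCAS semantics: both comparisons and both stores
     * behave as a single indivisible operation. */
    int DCAS(int *addr1, int *addr2,
             int  old1,  int  old2,
             int  new1,  int  new2)
    {
        /* <begin atomic> */
        if (*addr1 == old1 && *addr2 == old2) {
            *addr1 = new1;
            *addr2 = new2;
            return 1;   /* success: both locations updated */
        }
        return 0;       /* failure: neither location changed */
        /* <end atomic> */
    }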