The Asymmetric Multi-Process Event-Driven (AMPED) architecture, illustrated in Figure 5, combines the event-driven approach of the SPED architecture with multiple helper processes (or threads) that handle blocking disk I/O operations. By default, the main event-driven process handles all processing steps associated with HTTP requests. When a disk operation is necessary (e.g., because the requested file is unlikely to be in the main memory file cache), the main server process instructs a helper, via an inter-process communication (IPC) channel such as a pipe, to perform the potentially blocking operation. Once the operation completes, the helper returns a notification via IPC; the main server process learns of its completion through select, just as it learns of any other I/O event.
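To make this hand-off concrete, the following minimal sketch shows how a main process might forward a disk request to a helper over a pipe and pick up the completion notification inside its select loop. It is an illustration under assumed names (helper_req, dispatch_to_helper, event_loop), not Flash's actual code.

#include <string.h>
#include <sys/select.h>
#include <unistd.h>

struct helper_req {          /* message sent to the helper over the pipe */
    int  conn_fd;            /* connection waiting on this file */
    char path[256];          /* file the helper should fault into memory */
};

/* Main process: hand a potentially blocking disk operation to a helper. */
static void dispatch_to_helper(int helper_wr_fd, int conn_fd, const char *path)
{
    struct helper_req req = { .conn_fd = conn_fd };
    strncpy(req.path, path, sizeof(req.path) - 1);
    write(helper_wr_fd, &req, sizeof(req));   /* small, non-blocking pipe write */
}

/* Main event loop: the read end of the helper pipe is just another
 * descriptor in the select() set, so a completion notification is
 * handled like any other I/O event. */
static void event_loop(int listen_fd, int helper_rd_fd)
{
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(listen_fd, &rfds);
        FD_SET(helper_rd_fd, &rfds);
        int maxfd = listen_fd > helper_rd_fd ? listen_fd : helper_rd_fd;

        select(maxfd + 1, &rfds, NULL, NULL, NULL);

        if (FD_ISSET(helper_rd_fd, &rfds)) {
            int conn_fd;
            read(helper_rd_fd, &conn_fd, sizeof(conn_fd));
            /* the requested file is now in memory; resume serving conn_fd */
        }
        if (FD_ISSET(listen_fd, &rfds)) {
            /* accept() a new connection and add its descriptor to the set */
        }
    }
}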
The AMPED architecture strives to preserve the efficiency of the SPED architecture on operations other than disk reads, but avoids the performance problems suffered by SPED due to inappropriate support for asynchronous disk reads in many operating systems. AMPED achieves this using only support that is widely available in modern operating systems.
In a UNIX system, AMPED uses the standard non-blocking read, write, and accept system calls on sockets and pipes, and the select system call to test for I/O completion. The mmap operation is used to access data from the filesystem and the mincore operation is used to check if a file is in main memory.
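As an illustration of the residency test, a server built this way might check an mmap'd file with mincore before deciding whether it can be sent immediately or must be handed to a helper. The sketch below is an assumption-laden example (the function name file_in_memory is invented, and the exact type of mincore's result vector varies slightly across UNIX variants), not the paper's implementation.

#include <sys/mman.h>
#include <unistd.h>

/* Returns 1 if every page of the mapping [addr, addr+len) is resident in
 * main memory, 0 otherwise (or on error, to stay on the safe side). */
static int file_in_memory(void *addr, size_t len)
{
    long pagesize = sysconf(_SC_PAGESIZE);
    size_t npages = (len + pagesize - 1) / pagesize;
    unsigned char vec[npages];                /* one residency byte per page */

    if (mincore(addr, len, vec) < 0)
        return 0;

    for (size_t i = 0; i < npages; i++)
        if (!(vec[i] & 1))                    /* low bit set => page resident */
            return 0;
    return 1;
}

The main process would typically mmap the requested file (e.g., mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0)) and, if the check fails, dispatch the request to a helper rather than risk a page fault while transmitting the file.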
Note that the helpers can be implemented either as kernel threads within the main server process or as separate processes. Even when helpers are implemented as separate processes, the use of mmap allows the helpers to initiate the reading of a file from disk without introducing additional data copying. In this case, both the main server process and the helper mmap a requested file. The helper touches all the pages in its memory mapping. Once finished, it notifies the main server process that it is now safe to transmit the file without the risk of blocking.
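A helper's work can therefore be as simple as faulting the mapped file into memory. The sketch below assumes the hypothetical request/notification messages from the earlier sketch and is only illustrative of the technique described above.

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

struct helper_req { int conn_fd; char path[256]; };   /* same message as above */

/* Helper process: block on the request pipe, fault the file into memory,
 * then notify the main process over the completion pipe. */
static void helper_main(int req_rd_fd, int done_wr_fd)
{
    struct helper_req req;
    long pagesize = sysconf(_SC_PAGESIZE);

    while (read(req_rd_fd, &req, sizeof(req)) == (ssize_t)sizeof(req)) {
        int fd = open(req.path, O_RDONLY);
        struct stat st;
        fstat(fd, &st);

        /* Map the file and touch one byte per page; the blocking page
         * faults happen here, in the helper, not in the main process. */
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        volatile char sink = 0;
        for (off_t off = 0; off < st.st_size; off += pagesize)
            sink += p[off];
        (void)sink;

        munmap(p, st.st_size);
        close(fd);

        /* Tell the main process that conn_fd's file can now be sent
         * without risk of blocking. */
        write(done_wr_fd, &req.conn_fd, sizeof(req.conn_fd));
    }
}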