
Block driver improvements

As mentioned in the previous paper [1], the UML block driver can currently have only one I/O request outstanding at a time. This severely limits the kernel's ability to perform readahead, which aims to have data already in memory by the time a process needs it.

The obvious fix is to use an asynchronous I/O (AIO) mechanism so that a larger number of requests can be outstanding at once. An AIO mechanism is in the works, and I plan to change the block driver to use it.
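To make the idea concrete, here is a minimal user-space sketch using POSIX AIO (aio_read and friends from <aio.h>). The request count, block size, and the choice of POSIX AIO rather than whatever mechanism ultimately appears in the kernel are illustrative assumptions, not the driver's actual interface:

/* Sketch: keep several host I/O requests outstanding at once
 * using POSIX AIO.  NREQS and BLOCK are illustrative. */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define NREQS 8
#define BLOCK 4096

int main(int argc, char **argv)
{
    struct aiocb reqs[NREQS];
    char bufs[NREQS][BLOCK];

    if (argc < 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Queue all NREQS reads before waiting for any of them;
     * this is the concurrency the driver change would exploit. */
    for (int i = 0; i < NREQS; i++) {
        memset(&reqs[i], 0, sizeof(reqs[i]));
        reqs[i].aio_fildes = fd;
        reqs[i].aio_buf    = bufs[i];
        reqs[i].aio_nbytes = BLOCK;
        reqs[i].aio_offset = (off_t) i * BLOCK;
        if (aio_read(&reqs[i]) < 0) { perror("aio_read"); return 1; }
    }

    /* Collect the completions. */
    for (int i = 0; i < NREQS; i++) {
        const struct aiocb *list[1] = { &reqs[i] };
        while (aio_error(&reqs[i]) == EINPROGRESS)
            aio_suspend(list, 1, NULL);
        printf("request %d: %zd bytes\n", i, aio_return(&reqs[i]));
    }
    close(fd);
    return 0;
}

The point is that all eight reads are in flight before any completion is collected, which is exactly what readahead needs and what a one-request-at-a-time driver cannot provide.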

In addition, there is the possibility of doing I/O to the host with mmap rather than read and write. This opens up some interesting possibilities for low-overhead I/O. The files that the driver accesses would be mapped into the UML address space. If it can be arranged that an I/O request's data buffer is the corresponding mapped page in that region, then I/O from UML to the host is essentially free of overhead: as soon as the upper levels of the kernel's I/O system write the data into that buffer, the I/O is done. If the UML process is itself doing mmapped I/O, then the transfer is zero-copy from that process through UML into the host kernel.
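The following sketch illustrates the host side of this: a block of the backing file is mapped MAP_SHARED, and writing into the mapping is the I/O. The file name and sizes are hypothetical, and the file is assumed to exist and be at least a page long:

/* Sketch: mmap-based I/O to a backing file.  If the driver's
 * request buffer *is* this mapped page, storing into it completes
 * the write; msync() only forces the page out early. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    int fd = open("ubd0", O_RDWR);   /* hypothetical backing file */
    if (fd < 0) { perror("open"); return 1; }

    /* Map one block read-write and shared, so stores go to the
     * host page cache rather than a private copy. */
    char *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* "Doing the I/O" is just writing into the mapping. */
    memcpy(buf, "new block contents", 18);

    /* Optional: force the dirty page to the file now. */
    msync(buf, page, MS_SYNC);

    munmap(buf, page);
    close(fd);
    return 0;
}

With MAP_SHARED, the stores land directly in the host's page cache, so there is no separate write system call and no extra copy of the data.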

This would work equally well with COW and non-COW devices. The shared pages would be mapped into UML read-only, so reads would work normally. A write would cause an access fault, which would be trapped; the fault handler would unmap the read-only page and map the corresponding read-write page from the COW layer in its place. The write would then proceed normally, with the new data going into the private writable file.
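That fault-handling path can be sketched in user space as follows. The file names, mapping size, and handler are hypothetical, and mmap is not formally async-signal-safe, so this is only an illustration of the mechanism described above, not production code:

/* Sketch of the COW write-fault path: a shared backing file is
 * mapped read-only; a write faults, and the handler maps the COW
 * layer's read-write page over the faulting address, after which
 * the write restarts and succeeds. */
#include <fcntl.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static int cow_fd;
static char *base;      /* start of the device mapping */
static long page;

static void write_fault(int sig, siginfo_t *si, void *ctx)
{
    /* Round the faulting address down to its page. */
    char *addr = (char *)((uintptr_t) si->si_addr & ~(page - 1));

    /* Map the read-write page at the same offset in the private
     * COW file over the read-only page; MAP_FIXED atomically
     * replaces the old mapping. */
    off_t off = addr - base;
    if (mmap(addr, page, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_FIXED, cow_fd, off) == MAP_FAILED)
        _exit(1);
    /* Returning restarts the faulting write. */
}

int main(void)
{
    page = sysconf(_SC_PAGESIZE);
    int shared_fd = open("root_fs", O_RDONLY);   /* hypothetical */
    cow_fd = open("root_fs.cow", O_RDWR);        /* hypothetical */
    if (shared_fd < 0 || cow_fd < 0) { perror("open"); return 1; }

    /* Map the shared backing file read-only: reads work normally,
     * writes fault into the handler above. */
    base = mmap(NULL, 16 * page, PROT_READ, MAP_SHARED, shared_fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    struct sigaction sa = { .sa_sigaction = write_fault,
                            .sa_flags = SA_SIGINFO };
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    base[0] = 'x';    /* faults, gets remapped, then completes */
    return 0;
}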

A further advantage of doing I/O this way is that filesystem data shared by multiple virtual machines would no longer be copied once for each of them; the mapped file would occupy host memory that is shared between the UMLs. This is a potentially large improvement over the current situation, where data used by multiple UMLs is copied separately into each one. Sharing in this way reduces their memory consumption on the host, and potentially increases the number of virtual machines a physical machine can host.

