vPipe: One Pipe to Connect Them All
Many enterprises use the cloud to host applications such as Web services, big data analytics, and storage. These applications involve significant I/O activity, moving data from a source to a sink, often without any intermediate processing. Cloud environments, however, tend to be virtualized, which introduces significant overhead for I/O: data must cross several protection boundaries, and CPU sharing among virtual machines (VMs) adds further delays to the overall I/O processing path. In this article, we present an abstraction called vPipe to mitigate these problems. vPipe introduces a simple “pipe” that connects data sources and sinks, either files or TCP sockets, at the virtual machine monitor (VMM) layer. Shortcutting the I/O path at the VMM layer yields significant CPU savings and avoids the scheduling latencies that degrade I/O throughput.
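To make the source-to-sink pattern concrete, here is a minimal C sketch of what a guest application using such an interface might look like. The vpipe() call below is a hypothetical stand-in, not the article's actual API: it represents a request to move data directly from a source descriptor (a file) to a sink descriptor (a TCP socket), in the spirit of sendfile(2). In a vPipe-enabled system the copy would be carried out at the VMM layer; the fallback loop here exists only so the sketch compiles and runs on an ordinary Linux host. The file name, address, and port are placeholders.

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Hypothetical stand-in for a VMM-level pipe request: stream 'count'
 * bytes from 'src_fd' (a file) to 'dst_fd' (a connected TCP socket).
 * In a real system this would be a hypercall or paravirtual driver
 * request handled below the guest; here it is a plain read/write loop
 * so the sketch remains runnable without a vPipe-enabled VMM.        */
static ssize_t vpipe(int src_fd, int dst_fd, size_t count)
{
    char buf[64 * 1024];
    size_t total = 0;
    while (total < count) {
        ssize_t n = read(src_fd, buf, sizeof(buf));
        if (n <= 0)
            break;
        ssize_t w = write(dst_fd, buf, (size_t)n);
        if (w < 0)
            return -1;
        total += (size_t)w;
    }
    return (ssize_t)total;
}

int main(void)
{
    /* Source: a local file (placeholder name). */
    int file_fd = open("data.bin", O_RDONLY);
    if (file_fd < 0) { perror("open"); return 1; }

    /* Sink: a TCP connection (placeholder address and port). */
    int sock_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in dst = { .sin_family = AF_INET, .sin_port = htons(9000) };
    inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr);
    if (connect(sock_fd, (struct sockaddr *)&dst, sizeof(dst)) < 0) {
        perror("connect"); return 1;
    }

    /* Ask for the whole file to be streamed to the socket. */
    off_t len = lseek(file_fd, 0, SEEK_END);
    lseek(file_fd, 0, SEEK_SET);
    if (vpipe(file_fd, sock_fd, (size_t)len) < 0)
        perror("vpipe");

    close(file_fd);
    close(sock_fd);
    return 0;
}
```

The point of the sketch is the shape of the request: the guest names the two endpoints once, and the data itself never needs to be copied up into the guest's user space on a per-buffer basis, which is where the CPU and scheduling savings described above come from.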