Increasing Effective Link Bandwidth by Suppressing Replicated Data
Jonathan Santos and David Wetherall, Massachusetts Institute of Technology
In the Internet today, transfer rates are often limited by the bandwidth of a bottleneck link rather than the computing power available at the ends of the links. To address this problem, we have utilized inexpensive commodity hardware to design a novel link layer caching and compression scheme that reduces bandwidth consumption. Our scheme is motivated by the prevalence of repeated transfers of the same information, as may occur due to HTTP, FTP, and DNS traffic. Unlike existing link compression schemes, it is able to detect and use the long-range correlation of repeated transfers. It also complements application-level systems that reduce bandwidth usage, e.g., Web caches, by providing additional protection at a lower level, as well as an alternative in situations where application-level cache deployment is not practical or economic.
We make three contributions in this paper. First, to motivate our scheme we show by packet trace analysis that there is significant replication of data at the packet level, mainly due to Web traffic. Second, we present an innovative link compression protocol well-suited to traffic with such long-range correlation. Third, we demonstrate by experimentation that the availability of inexpensive memory and general-purpose processors in PCs makes our protocol practical and useful at rates exceeding T3 (45 Mbps).
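To illustrate the core idea of suppressing replicated data at the link layer, here is a minimal sketch in Python. It assumes caches at both ends of the link keyed by a payload fingerprint; the frame tags, the 8-byte MD5-derived fingerprint, and the class names are our own illustrative choices, not the paper's actual protocol encoding.

    import hashlib

    FINGERPRINT_BYTES = 8  # assumed token size, for illustration only

    def fingerprint(payload: bytes) -> bytes:
        # Short fingerprint derived from a hash of the payload.
        return hashlib.md5(payload).digest()[:FINGERPRINT_BYTES]

    class SuppressingSender:
        def __init__(self):
            # Fingerprints of payloads the receiver is assumed to have cached.
            self.cache = set()

        def encode(self, payload: bytes) -> bytes:
            fp = fingerprint(payload)
            if fp in self.cache:
                return b"T" + fp       # token frame: send fingerprint only
            self.cache.add(fp)
            return b"D" + payload      # data frame: send full payload

    class SuppressingReceiver:
        def __init__(self):
            self.cache = {}            # fingerprint -> payload

        def decode(self, frame: bytes) -> bytes:
            kind, body = frame[:1], frame[1:]
            if kind == b"T":
                return self.cache[body]   # replay previously seen payload
            self.cache[fingerprint(body)] = body
            return body

A repeated payload thus crosses the link as a few fingerprint bytes instead of the full packet. A real link protocol must also bound cache size with an eviction policy and keep the two caches consistent under packet loss, which this sketch ignores.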
@inproceedings{santos98bandwidth,
  author    = {Jonathan R. Santos and David J. Wetherall},
  title     = {Increasing Effective Link Bandwidth by Suppressing Replicated Data},
  booktitle = {1998 USENIX Annual Technical Conference (USENIX ATC 98)},
  year      = {1998},
  address   = {New Orleans, LA},
  url       = {https://www.usenix.org/conference/1998-usenix-annual-technical-conference/increasing-effective-link-bandwidth-supressing},
  publisher = {USENIX Association},
  month     = jun,
}