6. Related Work

6.0.0.1 File Transfer and File Sharing.

FTP and HTTP transfer objects sequentially, relying on the TCP transport to preserve byte ordering. With marker blocks [23], a transfer can be restarted after a failure. Raman et al. improve the interactive transfer of images over the Internet by delivering data to the client as it arrives, weakening the in-order abstraction of TCP [24]. Diot and Gagnon examine the benefits of out-of-sequence packet processing [15], but do not consider large-file delivery or interactions with storage devices.
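
As a rough illustration of restartable transfers (a simplification: [23] defines restart markers at the FTP protocol level, whereas the sketch below uses HTTP byte-range requests), the following Python fragment resumes an interrupted download from the last byte already on disk. The function name and the 64 KB read size are arbitrary choices, and error handling for servers that do not honor Range is omitted.

    import os
    import urllib.request

    def resume_download(url, path):
        # Restart an interrupted transfer from the last byte already on disk.
        offset = os.path.getsize(path) if os.path.exists(path) else 0
        request = urllib.request.Request(url, headers={"Range": "bytes=%d-" % offset})
        with urllib.request.urlopen(request) as response, open(path, "ab") as out:
            while True:
                chunk = response.read(64 * 1024)
                if not chunk:
                    break
                out.write(chunk)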

Many wide-area storage systems allow a client to download different parts of a file from multiple servers (e.g., BitTorrent [14]); these clients resequence the data to tolerate out-of-order delivery. Acharya et al. propose a server architecture for repetitive transmission of data over a broadcast channel [1], where the transmission frequency of each data item is determined by its popularity across the served client population.
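
A minimal sketch of such multi-source resequencing, assuming HTTP mirrors that honor byte-range requests; the function names, block size, and round-robin assignment are illustrative rather than taken from any particular system.

    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    def fetch_block(mirror_url, start, end):
        # Fetch bytes [start, end] of the file from one mirror.
        request = urllib.request.Request(
            mirror_url, headers={"Range": "bytes=%d-%d" % (start, end)})
        with urllib.request.urlopen(request) as response:
            return start, response.read()

    def multi_source_download(mirrors, file_size, path, block_size=1 << 20):
        # Assign disjoint blocks round-robin to the mirrors; blocks may complete
        # in any order, and writing each at its own offset resequences the file.
        offsets = range(0, file_size, block_size)
        with open(path, "wb") as out, ThreadPoolExecutor(len(mirrors)) as pool:
            futures = [pool.submit(fetch_block, mirrors[i % len(mirrors)],
                                   start, min(start + block_size, file_size) - 1)
                       for i, start in enumerate(offsets)]
            for future in futures:
                start, data = future.result()
                out.seek(start)
                out.write(data)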

6.0.0.2 Forward Error Correction.

Digital Fountain [9] encodes content with forward error correction (FEC) codes (e.g., Tornado codes) for distribution over a multicast network. FEC allows a client to reconstruct a file once it has received a minimum number of distinct blocks, which eliminates the need for acknowledgments in a multicast setting. The system can be extended to transport large files to a client from multiple collaborating sources in overlay networks [8].
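
The "any k of n blocks" property can be illustrated with a deliberately tiny erasure code. The sketch below uses a single XOR parity block over two equal-length data blocks (k = 2, n = 3); it is not the Tornado coding used by Digital Fountain, only a minimal demonstration of reconstruction from any sufficiently large subset of encoded blocks.

    def encode(block_a, block_b):
        # Toy "any 2 of 3" erasure code: two equal-length data blocks plus parity.
        parity = bytes(x ^ y for x, y in zip(block_a, block_b))
        return {"A": block_a, "B": block_b, "P": parity}

    def decode(received):
        # Reconstruct both data blocks from any two of the three encoded blocks.
        if "A" in received and "B" in received:
            return received["A"], received["B"]
        if "A" in received:  # B = A xor P
            return received["A"], bytes(x ^ y for x, y in zip(received["A"], received["P"]))
        # A = B xor P
        return bytes(x ^ y for x, y in zip(received["B"], received["P"])), received["B"]

    blocks = encode(b"hello world!", b"circus paper")
    del blocks["A"]  # pretend block A was lost in transit
    assert decode(blocks) == (b"hello world!", b"circus paper")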

In a unicast network, FEC encoding can be applied to improve caching efficiency at the server [26]. Since the client can reconstruct the data from any sufficiently large subset of the encoded blocks, a block fetched from disk may be useful to multiple clients with different request arrival times and different rates. If a block is lost, another may be sent in its place, so the server need not buffer data for retransmission. However, duplicate blocks waste client bandwidth; in a typical heterogeneous environment, where client receiving rates can differ by several orders of magnitude, the encoded version of the transmitted file must be much larger than the original to keep low the probability that an arbitrary block is a duplicate for some active client [9,26]. Recent theoretical work begins to address this problem [19]; if a satisfactory solution is found, FEC could meet our objectives of downloading large files with a high degree of sharing and low network impact, e.g., when multicast is available.
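
A back-of-envelope model of the duplicate problem, assuming the server picks each transmitted block uniformly at random from the encoded set; the parameter names and the uniform-choice assumption are ours, for illustration only.

    def duplicate_probability(file_blocks, stretch_factor, fraction_received):
        # A file of file_blocks blocks is encoded into stretch_factor * file_blocks
        # blocks; a client already holding r distinct blocks then sees a duplicate
        # with probability r / (stretch_factor * file_blocks).
        r = fraction_received * file_blocks
        return r / (stretch_factor * file_blocks)

    # A client that already holds 95% of the blocks it needs still receives
    # roughly 9.5% duplicates with a 10x stretch, versus roughly 0.95% with a
    # 100x stretch.
    print(duplicate_probability(1000, 10, 0.95))
    print(duplicate_probability(1000, 100, 0.95))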

Circus demonstrates a technique with similar goals for content distribution: to maximize the benefit of data sharing across concurrent requests while allowing clients with different rates to reassemble the requested file quickly and efficiently. However, Circus does not use FEC codes, and it is effective over unicast, although it could also benefit from multicast.

6.0.0.3 Stream Merging Methods.

A class of merging methods for multicast delivery of streaming media allows a client to receive data transmitted concurrently to other clients [17,18]. These file segmentation schemes balance server network throughput, client network throughput, and playback initiation latency. They differ significantly from Circus because they are designed to support real-time delivery guarantees over reliable multicast-enabled networks, assuming a fixed receiving rate for each client. In contrast, Circus supports efficient file (or file segment) downloads over unicast networks for clients with different rates, and exploits the complementary technique of block reordering.
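
For intuition only, the following toy calculation shows the basic merging effect under strong simplifying assumptions (unit playback rate, a client able to receive two streams at once, and none of the segmentation details of [17,18]).

    def dedicated_seconds(file_duration, start_gap):
        # A client that starts start_gap seconds after an earlier client listens
        # to both streams, bridges the gap after start_gap seconds of dual
        # reception, and then shares the earlier stream for the remainder of
        # playback; the server sends at most start_gap seconds of dedicated data.
        return min(start_gap, file_duration)

    # A client arriving 60 s after another costs the server 60 s of dedicated
    # transmission for a 7200 s file, instead of a full second stream.
    print(dedicated_seconds(7200, 60))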

The insights underlying our approach are related to Steere's work on asynchronous set iterators [29], although our approach does not affect the order in which data is delivered to a client application.

