To make the results easier to interpret, we sorted the URLs by average
uncompressed file size.
Figure 7 graphs the average size (across all
versions of a page) as a function of the
resulting URL index, which is used in the other graphs below.
Figure 7: Average uncompressed data size across all versions of each page, sorted
by size, shown on a log scale.
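To make the ordering concrete, the following Python sketch computes the per-URL average size and the resulting index. It assumes a hypothetical layout in which each page's versions are stored as local files; it is illustrative rather than the code we actually used.

    import os
    from statistics import mean

    def sorted_url_index(versions_by_url):
        # versions_by_url maps a URL to the list of files holding its
        # successive uncompressed versions (a hypothetical layout).
        avg_size = {url: mean(os.path.getsize(f) for f in files)
                    for url, files in versions_by_url.items()}
        # Index 0 is the page with the largest average size, matching the
        # ordering used in Figure 7 and in the graphs below.
        ranked = sorted(avg_size, key=avg_size.get, reverse=True)
        return {url: i for i, url in enumerate(ranked)}, avg_size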
Figure 8 shows our results.
Each graph plots
the average ratio of end-to-end latency using the modified system to
latency using the unmodified system. The left column shows cases
where the client proxy caches the previous version (simple deltas),
while the right column
shows the use of optimistic deltas. The first row shows no added
content provider latency, and the second row shows 5s of added
latency.
The URLs are sorted in the same
sequence as in Figure 7. The solid line in each
graph indicates the mean of all the points in the graph, while the
dashed line indicates the break-even point.
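For clarity, the metric plotted in Figure 8 can be expressed as the following Python sketch, which computes each page's latency ratio averaged across versions along with the mean and break-even lines. The timing input is hypothetical, and this is not the script used to generate the figures.

    from statistics import mean

    def latency_ratios(timings):
        # timings maps a URL to a list of (modified, unmodified) end-to-end
        # latency pairs in seconds, one pair per version (hypothetical input).
        ratios = {url: mean(mod / unmod for mod, unmod in pairs)
                  for url, pairs in timings.items()}
        mean_line = mean(ratios.values())   # solid line in each graph
        break_even = 1.0                    # dashed line: no change in latency
        return ratios, mean_line, break_even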
The cost of computing deltas and patching was negligible (1-2%)
compared to the network transfer time and protocol processing overhead in all
our experiments. Moreover, the largest measured overhead for
computing a delta and applying it on the client was much smaller
than the typical variation in total URL fetch
times.
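This overhead fraction can be obtained with a measurement of the following form; compute_delta and apply_delta here are hypothetical stand-ins for the actual delta encoder and patch routine, and total_fetch_time is the separately measured end-to-end fetch time.

    import time

    def overhead_fraction(old_data, new_data, total_fetch_time,
                          compute_delta, apply_delta):
        # Time spent computing the delta (server proxy) and applying it
        # (client proxy), as a fraction of the end-to-end fetch time.
        start = time.perf_counter()
        delta = compute_delta(old_data, new_data)
        patched = apply_delta(old_data, delta)
        overhead = time.perf_counter() - start
        assert patched == new_data
        return overhead / total_fetch_time   # roughly 0.01-0.02 in our experiments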
Figure 8: Experimental results, showing ratios of end-to-end latency
for modified versus unmodified system, varying whether old versions
are cached on the client or sent optimistically by the server, and
whether the content provider adds 0s or 5s of latency before
returning content. Each data point represents the average across
all versions for the corresponding page. The solid line in each
graph indicates the mean of all the points in the graph, while the
dashed line indicates the break-even point. The ``simple delta''
case never experienced aborts, while the ``optimistic delta'' case
experienced aborts that are indicated with a different symbol.
From Figure 8 we draw the following conclusions:
- The pages with the lowest indices, which have the largest original
file sizes, tended to show more improvement than the smaller pages.
Although the general trend is upward (i.e., toward less improvement)
as one moves right along the X-axis, there is great variation from
page to page.
- As expected, without added latency, many of the pages took
longer using the optimistic approach than without it. The
measurements in Figure 8(b) were taken with a simple
abort strategy in place: the server proxy aborted only when it
had finished computing the delta and the amount of remaining stale
data plus the delta exceeded the size of the regular response
(a sketch of this decision appears after this list).
We expect that a smarter abort strategy, such as aborting an ongoing
optimistic transfer of stale data and ``cutting through'' to the new
data as it is received whenever the new data appears to be very
different from the cached stale copy, would cap the latency of our
system at close to 100% of that of the unmodified system.
- With 5s added
latency, most pages were received faster by the client using
optimistic deltas, with a mean improvement of 27%. In fact, the latency for
optimistic deltas with 5s added delay was consistently
somewhat less than that for simple deltas with the same delay. We attribute the
better performance of optimistic deltas to TCP's slow-start
algorithm [13]. With optimistic deltas, the transfer of the stale data opened
up the TCP congestion window, so the delta itself was transferred faster
than with simple deltas, where the transfer of the delta had
to open up the congestion window by itself.
- Nearly all of the simple deltas improved performance regardless
of added latency, as one would expect. As predicted, the
relative gain was generally better
when the fixed overhead was lower. In the
case of 0s
added latency, most of the points that showed degradation were pages
for which only two versions were available (and hence more susceptible
to variability from external factors) and whose deltas were
40-60% of the original file size. The overall improvement was
33%, the best of the four configurations.
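To make the simple abort criterion described in the second item above concrete, here is a minimal Python sketch of the decision the server proxy makes once the delta has been computed; the function name and byte-string interface are illustrative, not our implementation.

    def should_abort(stale_remaining, delta, new_response):
        # Abort the optimistic transfer only if shipping the rest of the stale
        # copy plus the delta would cost more bytes than simply sending the
        # ordinary (full) new response. All arguments are byte strings.
        return len(stale_remaining) + len(delta) > len(new_response)

    # Example: 30 KB of stale data still queued, a 5 KB delta, and a 20 KB new
    # page give 35 KB > 20 KB, so the transfer is aborted and the new page is
    # sent directly.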