In this experiment we fix the spare capacity of the network to 1
and vary the number
of background flows.
Figure 3 plots the latency of
foreground document transfers
against the number of background flows. Even with 100
background Nice flows, the latency of foreground documents is hardly
distinguishable from the ideal case in which routers provide strict prioritization.
On the other hand,
Reno and Vegas background flows can cause foreground latencies to
increase by orders of magnitude.
Figure 4 plots the number of bytes the background
flows manage to transfer. A single background flow reaps about half
the spare bandwidth available under router prioritization; this
background throughput improves with an increasing number of
background flows but remains below the router-prioritization level. The
difference is the price we pay for ensuring non-interference with a
purely end-to-end algorithm. Note that although Reno
and Vegas obtain better throughput, they exceed the router
prioritization line even for a small number of flows, which means they
steal bandwidth from foreground traffic.
Figure 3: Number of background flows vs. foreground latency
Figure 4: Number of background flows vs. background throughput
We also ran experiments in which we do not allow Nice's congestion
window to fall below 1 (graph omitted). In this case, once the number of background flows
exceeds about 10, the latency of foreground flows begins to increase
noticeably; the increase is about a factor of two when the number of
background flows is .
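The effect of the window floor can be sketched in a few lines of Python. This is our own illustration, not the paper's implementation; the function names and the `min_cwnd` floor are assumptions. The idea is that a sender whose congestion window may fall below one packet paces out one packet every 1/cwnd round-trip times, so many background flows together can still inject less than one packet per RTT, whereas flows clamped at cwnd = 1 each inject at least one packet per RTT.

```python
def on_congestion(cwnd, min_cwnd=1 / 64):
    """Halve the congestion window on a congestion signal,
    without clamping it at one packet (hypothetical floor min_cwnd)."""
    return max(cwnd / 2.0, min_cwnd)

def rtts_between_packets(cwnd):
    """With a fractional window, one packet goes out every 1/cwnd RTTs;
    with cwnd >= 1, at least one packet goes out each RTT."""
    return max(1.0 / cwnd, 1.0)

# A flow repeatedly signaled congestion backs off below one packet:
w = 1.0
w = on_congestion(w)   # 0.5  -> one packet every 2 RTTs
w = on_congestion(w)   # 0.25 -> one packet every 4 RTTs
```

With 100 such flows at cwnd = 0.25, the aggregate is 25 packets per RTT rather than the 100 per RTT forced by a floor of one packet, which is consistent with the foreground latency increase observed when the floor is imposed.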
Figure 5: Threshold vs. foreground latency
Arun Venkataramani
2002-10-08