
Evaluation Methodology

Our evaluation methodology is twofold: (1) we use wide-area experiments to evaluate how OverQoS performs in practice, and (2) we use simulations to better understand the performance of OverQoS over a wider range of network conditions.

Wide-Area Evaluation Testbed: Using resources from two large wide-area testbeds, RON [32] and PlanetLab [28], we construct a network of 19 nodes in diverse locations: 6 university nodes in Europe, 1 site in Korea, 1 in Canada, 3 company nodes, and 8 nodes behind access links (cable, DSL). Our main goal in choosing these nodes is to test OverQoS across wide-area links that we believe to be lossy. For this reason, we avoid nodes at US universities connected to Internet2, which is known to exhibit very few losses [7].
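To make the selection criterion concrete, the following is a minimal sketch of the kind of UDP probing one could use to estimate the raw loss rate of a path between two testbed nodes. The probe count, spacing, and helper names are our assumptions for illustration, not part of the paper's measurement tooling.

\begin{verbatim}
# Hypothetical sketch: estimate the raw loss rate of a wide-area
# path by sending sequence-numbered UDP probes between two nodes.
import socket
import time

PROBES = 10_000      # number of probe packets (assumed value)
INTERVAL = 0.01      # 10 ms spacing, ~100 pkts/s (assumed value)

def probe_sender(dst_host: str, dst_port: int) -> None:
    """Send sequence-numbered UDP probes to the receiver."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(PROBES):
        sock.sendto(seq.to_bytes(4, "big"), (dst_host, dst_port))
        time.sleep(INTERVAL)

def probe_receiver(port: int) -> float:
    """Count received sequence numbers; return the observed loss rate."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    sock.settimeout(5.0)   # stop after 5 s of silence
    seen = set()
    try:
        while True:
            data, _ = sock.recvfrom(4)
            seen.add(int.from_bytes(data, "big"))
    except socket.timeout:
        pass
    return 1.0 - len(seen) / PROBES
\end{verbatim}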

Simulation Environment: We implemented all the functionality of the OverQoS architecture on top of the ns-2 simulator (version 2.1b8). Unless otherwise specified, our simulations use a simple topology consisting of a single congested 10 Mbps link, where we vary the background traffic to produce different types of loss patterns. We use three common bursty traffic models as background traffic: (a) long-lived TCP connections; (b) self-similar traffic [36]; and (c) web traffic [15]. In addition, we use publicly available loss traces to test the performance of a CLVL.
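To illustrate the trace-driven part of this setup, here is a minimal sketch of replaying a loss trace through a simplified ARQ-style loss-recovery model and measuring the residual loss rate against a target $q$. The trace format, the replay_clvl helper, and the retransmission model are our simplifying assumptions, not the paper's actual CLVL mechanisms.

\begin{verbatim}
# Hypothetical sketch: replay a packet loss trace (a sequence of
# 0/1 loss indicators) through a simplified ARQ-based recovery
# scheme and report the residual loss rate seen by the overlay.
import random

def replay_clvl(trace: list[int], retries: int = 1) -> float:
    """Residual loss rate after up to `retries` retransmissions.

    Simplification: each retransmission is itself lost with the
    trace's average loss probability, rather than replaying the
    trace in order.
    """
    p_loss = sum(trace) / len(trace)   # raw loss rate of the path
    residual = 0
    for lost in trace:
        attempts = 0
        while lost and attempts < retries:
            lost = 1 if random.random() < p_loss else 0
            attempts += 1
        residual += lost
    return residual / len(trace)

# Example: a synthetic trace with 5% raw loss. One retransmission
# pushes the residual loss toward p^2 = 0.25%; reaching a target
# like q = 0.1% needs further retries or added FEC redundancy.
trace = [1 if random.random() < 0.05 else 0 for _ in range(100_000)]
print(f"residual loss: {replay_clvl(trace, retries=2):.4%}")
\end{verbatim}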

Figure 7: Simulations: loss rate achieved by a CLVL across three types of background traffic. We set $q=0.1\%$; the bottleneck link is 10 Mbps with a RED queue.
\begin{figure}\small{
\begin{tabular}{|l|l|l|}
\hline
Background traffic & ... & ... \\
\hline
... & ... & ... \\
\hline
400 Web sessions & 0.68 & 0.03\% \\
\hline
\end{tabular}}\end{figure}

