End-to-end congestion control mechanisms such as those in TCP are not enough to prevent congestion collapse in the Internet (for one thing, not all applications may be willing to use them), and they must be supplemented by control mechanisms inside the network. The IRTF has singled out Random Early Detection (RED) as one queue management scheme recommended for rapid deployment throughout the Internet. However, RED is not a thoroughly understood scheme; witness, for example, how the recommended parameter settings, and even the various benefits RED is claimed to provide, have changed over the past few years. In this paper, we describe simple analytic models for RED and use these models to quantify the benefits (or lack thereof) brought about by RED. In particular, we examine the impact of RED on the loss and delay suffered by bursty and less bursty traffic (such as TCP and UDP traffic, respectively). We find that (i) RED does eliminate the higher loss bias against bursty traffic observed with Tail Drop, but not by decreasing the loss rate of bursty traffic; rather, by increasing that of non-bursty traffic; (ii) the number of consecutive packet drops is higher with RED than with Tail Drop, suggesting that RED might not help as anticipated with the global synchronization of TCP flows; (iii) RED can be used to control the average queueing delay in routers, and hence the end-to-end delay, but it increases the jitter of non-bursty streams. Thus, applications that generate smooth traffic, such as interactive audio applications, will suffer higher loss rates and require larger playout buffers, thereby negating, at least in part, the lower mean delay brought about by RED.
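For context, the RED mechanism discussed in this abstract maintains an exponentially weighted average of the queue length and drops arriving packets with a probability that grows linearly between a minimum and a maximum threshold. The sketch below illustrates that drop decision only; it is a minimal, illustrative implementation of the classic RED rule, not the paper's analytic model, and the class name and parameter values are hypothetical.

```python
import random

class REDQueue:
    """Minimal sketch of the classic RED drop decision; parameters are illustrative."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, w_q=0.002, capacity=50):
        self.min_th = min_th      # average-queue threshold below which nothing is dropped
        self.max_th = max_th      # threshold at or above which every arrival is dropped
        self.max_p = max_p        # drop probability reached at max_th
        self.w_q = w_q            # EWMA weight for the average queue length
        self.capacity = capacity  # hard limit on the instantaneous queue
        self.queue = []
        self.avg = 0.0            # exponentially weighted average queue length
        self.count = 0            # packets enqueued since the last drop

    def enqueue(self, pkt):
        # Update the average queue length on each arrival.
        self.avg = (1 - self.w_q) * self.avg + self.w_q * len(self.queue)

        if self.avg >= self.max_th or len(self.queue) >= self.capacity:
            drop = True                                   # forced drop
        elif self.avg >= self.min_th:
            # Drop probability rises linearly between min_th and max_th.
            p_b = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            # Count-based correction spreads drops out over successive arrivals.
            p_a = p_b / max(1e-9, 1 - self.count * p_b)
            drop = random.random() < p_a
        else:
            drop = False

        if drop:
            self.count = 0
            return False
        self.count += 1
        self.queue.append(pkt)
        return True
```

The count-based correction is how RED is intended to avoid dropping several consecutive packets from the same flow; finding (ii) above questions how well this works in practice compared with Tail Drop.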
Packet sampling methods such as Cisco's NetFlow are widely employed by large networks to reduce the amount of traffic data measured. A key problem with packet sampling is that it is inherently a lossy process, discarding (potentially useful) information. In this paper, we empirically evaluate the impact of sampling on anomaly detection metrics. Starting with unsampled flow records collected during the Blaster worm outbreak, we reconstruct the underlying packet trace and simulate packet sampling at increasing rates. We then use our knowledge of the Blaster anomaly to build a baseline of normal traffic (without Blaster), against which we can measure the anomaly size at various sampling rates. This approach allows us to evaluate the impact of packet sampling on anomaly detection without being restricted to (or biased by) a particular anomaly detection method. We find that packet sampling does not disturb the anomaly size when measured in volume metrics such as the number of bytes and number of packets, but it grossly biases the number of flows. However, we find that recently proposed entropy-based summarizations of packet and flow counts are less affected by sampling and expose the Blaster worm outbreak even at higher sampling rates. Our findings suggest that entropy summarizations are more resilient to sampling than volume metrics. Thus, while not perfect, sampling still preserves sufficient distributional structure which, when harnessed by tools such as entropy, can expose hard-to-detect scanning anomalies.
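To make the entropy-based summarization concrete, the sketch below computes the normalized Shannon entropy of the per-destination packet count distribution before and after random packet sampling. It is a minimal illustration under stated assumptions, not the paper's methodology: the synthetic packet records, the sampling function, and the choice of destination addresses as the feature are all assumptions made for the example.

```python
import math
import random
from collections import Counter

def sample_packets(packets, rate):
    """Independent random sampling at probability `rate` (illustrative;
    NetFlow-style 1-in-N sampling is deterministic but behaves similarly here)."""
    return [p for p in packets if random.random() < rate]

def normalized_entropy(counts):
    """Shannon entropy of a count distribution, normalized to [0, 1]."""
    total = sum(counts)
    if total == 0 or len(counts) <= 1:
        return 0.0
    h = -sum((c / total) * math.log2(c / total) for c in counts if c > 0)
    return h / math.log2(len(counts))

# Hypothetical packet records: (src_ip, dst_ip) pairs with many distinct destinations,
# loosely mimicking scanning traffic.
packets = [
    ("10.0.0.%d" % random.randint(1, 50),
     "192.168.%d.%d" % (random.randint(0, 255), random.randint(0, 255)))
    for _ in range(100000)
]

for rate in (1.0, 0.1, 0.01):
    sampled = sample_packets(packets, rate)
    dst_counts = Counter(dst for _, dst in sampled).values()
    print("rate=%.2f  packets=%d  dst entropy=%.3f"
          % (rate, len(sampled), normalized_entropy(dst_counts)))
```

During a scanning outbreak such as Blaster, the destination-address distribution becomes far more dispersed, so its entropy rises; because entropy depends on the shape of the distribution rather than on absolute flow counts, it degrades more gracefully under sampling than the raw number of flows.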