Abstract - Our goal is to characterize the traffic load in an IEEE 802.11 infrastructure. This can be beneficial in many domains, including coverage planning, resource reservation, network monitoring for anomaly detection, and producing more accurate simulation models. The key issue that drives this study is traffic forecasting at each wireless access point (AP) on an hourly timescale. We conducted an extensive measurement study of wireless users on a major university campus using the IEEE 802.11 wireless infrastructure. We observed a spatial locality in the most heavily utilized APs. We propose several traffic models that take into account the periodicity and recent traffic history of each AP and present a time-series forecasting methodology. Finally, we build and evaluate these forecasting algorithms and discuss our findings.

I. INTRODUCTION

Wireless networks are increasingly being deployed and expanded in airports, universities, corporations, hospitals, residential areas, and other public spaces to provide wireless Internet access. Furthermore, there is an increase in peer-to-peer, streaming, and VoIP traffic over wireless infrastructures [9], [8]. At the same time, empirical studies and performance analyses indicate dramatically low performance of real-time constrained applications over wireless LANs (e.g., [2] on VoIP). Currently, APs do not perform any type of forecasting or admission control, and clients frequently experience failures and disconnections when there is high demand on the wireless infrastructure.

Shared-medium wireless LANs have more vulnerabilities and tighter bandwidth and latency constraints than their wired counterparts. The bandwidth utilization at an AP can impact the performance of the wireless clients in terms of throughput, delay, and energy consumption. For quality-of-service provisioning, capacity planning, load balancing, and network monitoring, it is critical to understand the traffic characteristics. While there is a rich literature characterizing traffic in wired networks ([11], [10], [15], [7]), only a few studies have examined wireless traffic load. The key issue that drives this study is forecasting on an hourly timescale. We aim to enable APs to perform short-term forecasting in order to perform better load balancing, admission control, and quality-of-service provisioning. Specifically, an AP can use the expected traffic estimates to decide whether or not to accept a new association request or to advise a client to associate with a neighboring AP. In addition, the traffic models can assist in detecting abnormal traffic patterns (e.g., due to malicious attacks, or AP and client misconfigurations and failures).
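To make the idea of forecasting from periodicity and recent traffic history concrete, the following is a minimal illustrative sketch, not the paper's actual models: it blends the mean load observed at the same hour of the week in past weeks (the periodic component) with the mean of the last few hours (the recent-history component). The function names, the blending weight alpha, and the window size are hypothetical choices for illustration only.

```python
# Minimal sketch of an hourly per-AP traffic forecast (assumption: not the
# paper's models). Blends a periodic hour-of-week average with a recent mean.

HOURS_PER_WEEK = 168

def forecast_next_hour(hourly_load, alpha=0.5, recent_window=3):
    """hourly_load: per-hour traffic volumes (e.g., bytes) for one AP, ordered
    in time; the forecast targets the hour right after the last entry."""
    next_slot = len(hourly_load) % HOURS_PER_WEEK  # hour-of-week being predicted

    # Periodic component: average load observed at the same hour of the week.
    same_slot = [v for i, v in enumerate(hourly_load)
                 if i % HOURS_PER_WEEK == next_slot]
    periodic = sum(same_slot) / len(same_slot) if same_slot else 0.0

    # Recent-history component: average of the most recent hours.
    recent = hourly_load[-recent_window:]
    recent_mean = sum(recent) / len(recent) if recent else 0.0

    return alpha * periodic + (1 - alpha) * recent_mean

# Example: two weeks of synthetic hourly load, then forecast the next hour.
load = [100 + 50 * (9 <= (h % 24) < 18) for h in range(336)]
print(forecast_next_hour(load))
```

In practice, alpha could be tuned per AP from held-out history, since APs with strong weekly periodicity benefit from a larger periodic weight.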
Although network intrusion detection systems (IDSs) have been studied for several years, their operators are still overwhelmed by a large number of false-positive alerts. In this work we study the following problem: from a large archive of intrusion alerts collected in a production network, we want to detect, with a small number of false positives, hosts within the network that have been infected by malware. Solving this problem is essential not only for reducing the false-positive rate of IDSs, but also for labeling traces collected in the wild with information about validated security incidents. We use a 9-month-long dataset of IDS alerts, and we first build a novel heuristic to detect infected hosts from the 3 million alerts, on average, that we observe per day. Our heuristic uses a statistical measure to find hosts that exhibit a repeated multi-stage malicious footprint involving specific classes of alerts. A significant part of our work is devoted to the validation of our heuristic. We conduct a complex experiment to assess the security of suspected infected systems in a production environment using data from several independent sources, including intrusion alerts, blacklists, host scanning logs, vulnerability reports, and search engine queries. We find that the false-positive rate of our heuristic is 15% and analyze in depth the root causes of the false positives. Having validated our heuristic, we apply it to our entire trace and characterize various important properties of the 9 thousand infected hosts found in total. For example, we find that, among the infected hosts, a small number of heavy hitters originate most outbound attacks and that future infections are more likely to occur close to already infected hosts.
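As a rough illustration of what a "repeated multi-stage malicious footprint" heuristic might look like, here is a minimal sketch; it is not the paper's heuristic or its statistical measure. The alert-class names, thresholds, and window size are all hypothetical.

```python
# Sketch (assumption: not the paper's actual heuristic): flag internal hosts
# whose alerts repeatedly span several malicious alert classes within a window.

from collections import defaultdict

MALICIOUS_CLASSES = {"exploit", "egg-download", "c&c-communication", "outbound-scan"}
MIN_STAGES = 2    # distinct alert classes required within one window
MIN_REPEATS = 3   # number of windows in which the footprint must recur

def flag_infected(alerts, window_seconds=3600):
    """alerts: iterable of (timestamp, internal_host_ip, alert_class) tuples."""
    # Distinct malicious classes observed per (host, time window).
    per_window = defaultdict(set)
    for ts, host, cls in alerts:
        if cls in MALICIOUS_CLASSES:
            per_window[(host, int(ts) // window_seconds)].add(cls)

    # Count, per host, the windows that show a multi-stage footprint.
    repeats = defaultdict(int)
    for (host, _), classes in per_window.items():
        if len(classes) >= MIN_STAGES:
            repeats[host] += 1

    return {host for host, n in repeats.items() if n >= MIN_REPEATS}
```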
Our goal is to characterize the traffic load in an IEEE 802.11 infrastructure. This can be beneficial in many domains, including coverage planning, resource reservation, network monitoring for anomaly detection, and producing more accurate simulation models. We conducted an extensive measurement study of wireless users on a major university campus using the IEEE 802.11 wireless infrastructure. This paper proposes and evaluates several traffic forecasting algorithms based on various traffic models that employ the periodicity, recent traffic history, and flow-related information. Finally, it discusses the impact of time-scale and history on the prediction accuracy.
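One simple way to study how the time-scale affects prediction accuracy, sketched below under my own assumptions rather than the paper's methodology, is to aggregate a fine-grained load series at several intervals and measure the error of a naive last-value predictor at each scale. The function names and the synthetic series are hypothetical.

```python
# Sketch: effect of aggregation time-scale on forecast error (assumption:
# naive last-value predictor and mean relative error, not the paper's setup).

def aggregate(series, factor):
    """Sum consecutive groups of `factor` samples (e.g., minutes -> hours)."""
    return [sum(series[i:i + factor])
            for i in range(0, len(series) - factor + 1, factor)]

def mean_relative_error(series):
    """Error of predicting each value with the previous one."""
    errs = [abs(series[t] - series[t - 1]) / series[t]
            for t in range(1, len(series)) if series[t] > 0]
    return sum(errs) / len(errs) if errs else float("nan")

# Example: one week of synthetic per-minute load, evaluated at two time-scales.
per_minute = [10 + (t % 60) for t in range(7 * 24 * 60)]
for factor in (10, 60):  # 10-minute and hourly aggregation
    print(factor, mean_relative_error(aggregate(per_minute, factor)))
```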
The manual forensics investigation of security incidents is an opaque process that involves the collection and correlation of diverse evidence. In this work we first conduct a complex experiment to expand our understanding of forensics analysis processes. During a period of 4 weeks, we systematically investigated 200 detected security incidents about compromised hosts within a large operational network. We used data from four commonly used security sources, namely Snort alerts, reconnaissance and vulnerability scanners, blacklists, and a search engine, to manually investigate these incidents. Based on our experiment, we first evaluate the (complementary) utility of the four security data sources and surprisingly find that the search engine provided useful evidence for diagnosing many more incidents than more traditional security sources, i.e., blacklists, reconnaissance, and vulnerability reports. Based on our validation, we then identify and make publicly available a list of 165 good Snort signatures, i.e., signatures that were effective in identifying validated malware without producing false positives. In addition, we analyze the characteristics of good signatures and identify strong correlations between different signature features and their effectiveness, i.e., the number of validated incidents in which a good signature is identified. Based on our experiment, we finally introduce an IDS signature quality metric that can be exploited by security specialists to evaluate the available rulesets, prioritize the generated alerts, and facilitate the forensics analysis processes. We apply our metric to characterize the most popular Snort rulesets. Our analysis of signatures is useful not only for configuring Snort but also for establishing best practices and for teaching how to write new IDS signatures.
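In the spirit of the signature quality metric described above (signatures that identify many validated incidents while producing few false positives should rank high), here is a hedged, precision-like sketch; the paper's actual metric may be defined differently, and the signature identifiers and counts below are purely hypothetical.

```python
# Sketch of a per-signature quality score (assumption: simple precision-like
# ratio, not the paper's metric). Used to rank signatures in a ruleset.

def signature_quality(validated_incidents, false_positives):
    """Score in [0, 1] for one IDS signature."""
    total = validated_incidents + false_positives
    return validated_incidents / total if total else 0.0

# Hypothetical ruleset summary: signature id -> (validated incidents, false positives)
ruleset = {
    "sig-A": (12, 1),
    "sig-B": (0, 40),
}
ranked = sorted(ruleset, key=lambda sid: signature_quality(*ruleset[sid]), reverse=True)
print(ranked)  # signatures ordered from most to least trustworthy
```

Such a score could be smoothed or weighted by alert volume before being used to prioritize alerts, but that refinement is beyond this sketch.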