Abstract. A firewall is a system acting as an interface between a network and one or more external networks. It implements the security policy of the network by deciding which packets to let through, based on rules defined by the network administrator. Any error in defining the rules may compromise the system security by letting unwanted traffic pass or blocking desired traffic. Manual definition of rules often results in a set that contains conflicting, redundant, or overshadowed rules, producing anomalies in the policy. Manually detecting and resolving these anomalies is a critical but tedious and error-prone task. Existing research on this problem has focused on the analysis and detection of anomalies in firewall policies: previous works define the possible relations between rules, define anomalies in terms of those relations, and present algorithms to detect the anomalies by analyzing the rules. In this paper, we discuss some necessary modifications to the existing definitions of the relations. We present a new algorithm that simultaneously detects and resolves any anomaly present in the policy rules through necessary reorder and split operations, generating a new anomaly-free rule set, and we give a proof of correctness of the algorithm. We also present an algorithm to merge rules where possible, in order to reduce the number of rules and hence increase the efficiency of the firewall.
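To make the relation-based view of anomalies concrete, the following is a minimal sketch of detecting one such relation, shadowing: a later rule is shadowed when every packet it matches is already matched by an earlier rule with a different action. The `Rule` representation, the reduction of address and port fields to integer ranges, and the helper names are illustrative assumptions, not the paper's actual definitions or algorithm.

```python
# Minimal sketch of relation-based shadowing detection between firewall rules.
# The Rule layout and helper names are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Rule:
    src: range      # source address range (simplified to an integer range)
    dst: range      # destination address range
    port: range     # destination port range
    action: str     # "accept" or "deny"

def covers(a: range, b: range) -> bool:
    """True if range a fully contains range b."""
    return a.start <= b.start and b.stop <= a.stop

def is_subset(r1: Rule, r2: Rule) -> bool:
    """True if every packet matched by r1 is also matched by r2."""
    return covers(r2.src, r1.src) and covers(r2.dst, r1.dst) and covers(r2.port, r1.port)

def find_shadowing(rules: list[Rule]) -> list[tuple[int, int]]:
    """Return (i, j) pairs where rule j is shadowed by an earlier rule i."""
    anomalies = []
    for j, rj in enumerate(rules):
        for i in range(j):
            ri = rules[i]
            if is_subset(rj, ri) and ri.action != rj.action:
                anomalies.append((i, j))
    return anomalies
```

Resolving such an anomaly would then amount to reordering the two rules or splitting the later rule so that only the non-overlapping part remains, in the spirit of the reorder and split operations described above.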
The Aviation Safety Reporting System collects voluntarily submitted reports on aviation safety incidents to facilitate research aimed at reducing such incidents. To effectively reduce these incidents, it is vital to accurately identify why they occurred. More precisely, given a set of possible causes, or shaping factors, the task of cause identification involves identifying all and only those shaping factors that are responsible for the incidents described in a report. We investigate two approaches to cause identification. Both approaches exploit information provided by a semantic lexicon, which is automatically constructed via Thelen and Riloff's Basilisk framework augmented with our linguistic and algorithmic modifications. The first approach labels a report using a simple heuristic, which looks in the report for the words and phrases acquired during the semantic lexicon learning process. The second approach recasts cause identification as a text classification problem, employing supervised and transductive text classification algorithms to learn models from incident reports labeled with shaping factors and using the models to label unseen reports. Our experiments show that both the heuristic-based approach and the learning-based approach (when given sufficient training data) significantly outperform the baseline system.
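The lexicon-lookup heuristic can be pictured as a simple containment check: a report receives a shaping-factor label if it contains any word or phrase that the lexicon learner acquired for that factor. The sketch below shows this idea only; the example lexicon entries, report text, and function name are hypothetical and are not taken from the paper.

```python
# Sketch of the lexicon-lookup heuristic: a report is labeled with a shaping
# factor if it contains any word or phrase acquired for that factor.
# The example lexicon entries and report text are illustrative assumptions.
def label_report(report: str, lexicon: dict[str, set[str]]) -> set[str]:
    """Return the shaping factors whose lexicon entries appear in the report."""
    text = report.lower()
    return {factor for factor, phrases in lexicon.items()
            if any(phrase in text for phrase in phrases)}

# Hypothetical usage:
lexicon = {
    "Fatigue": {"tired", "lack of sleep", "long duty day"},
    "Communication": {"readback", "misheard", "frequency congestion"},
}
print(label_report("The crew was tired after a long duty day.", lexicon))
# -> {'Fatigue'}
```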
The firewall is usually the first line of defense in ensuring network security for an organization. However, the management of firewalls has proved to be complex, error-prone, and costly for many large networks. Manually configured firewall rules can easily contain anomalies and mistakes. Even if the rules are anomaly-free, defects in the firewall implementation or the firewall device may prevent the organization from getting the desired effect. To evaluate the effectiveness of a firewall policy and to validate that the firewall correctly implements the rules in the policy, a thorough analysis of network traffic data is required. However, due to the magnitude of traffic log data and the complexity of the analysis, manual evaluation is very challenging and economically infeasible. In this paper, we tackle this problem by presenting a set of algorithms that simplify this process. By analyzing only the firewall log files, we regenerate the effective firewall rules, i.e., what the firewall is really doing. By comparing these with the original manually defined rules, we can easily determine whether there is any anomaly in the original rule set and whether there is any defect in the firewall implementation. In our process, we first reduce the data size by generating primitive firewall rules, mining the firewall network traffic log using packet frequencies (MLF). We then regenerate the firewall rules from the primitive rules by applying the Firewall Rule Regeneration (FRR) algorithm, which uses aggregation and a set of heuristics. Our analysis also discovers decaying rules and dominant rules, information that can be used to significantly improve the firewall's filtering performance. Our experiments showed that the effective firewall rules can be regenerated to a high degree of accuracy from a small amount of data. Also, since we use only log files, and not the actual packet data, there is no risk of exposing any sensitive data.
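As a rough illustration of the first, frequency-based step, log entries can be grouped by the fields a rule would match on and counted, with frequent groups kept as primitive rules. The field names and threshold below are assumptions for the sketch; the paper's MLF and FRR algorithms additionally perform aggregation and apply heuristics beyond this simple counting.

```python
# Rough sketch of frequency-based mining of primitive rules from a firewall log.
# Field names and the threshold are assumptions; the actual MLF/FRR algorithms
# go further, aggregating primitive rules and applying heuristics.
from collections import Counter

def mine_primitive_rules(log_entries, min_count=10):
    """Group log entries by (src, dst, port, action) and keep frequent groups."""
    freq = Counter((e["src"], e["dst"], e["port"], e["action"]) for e in log_entries)
    return [
        {"src": src, "dst": dst, "port": port, "action": action, "hits": n}
        for (src, dst, port, action), n in freq.most_common()
        if n >= min_count
    ]
```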