Despite the growing popularity of machine learning models in cybersecurity applications (e.g., intrusion detection systems (IDS)), most of these models are perceived as black boxes. eXplainable Artificial Intelligence (XAI) has become increasingly important for interpreting machine learning models and enhancing trust management by allowing human experts to understand the underlying data evidence and causal reasoning. In an IDS, the critical role of trust management is to understand the impact of malicious data in order to detect intrusions in the system. Previous studies have focused mainly on the accuracy of various classification algorithms for trust in IDS; they rarely provide insight into the behavior and reasoning of these sophisticated algorithms. Therefore, in this paper, we address the XAI concept to enhance trust management by exploring the decision tree model in the IDS setting. We use simple decision tree algorithms whose output can be read easily and that even resemble a human approach to decision-making by splitting a choice into many small sub-choices. We experiment with this approach by extracting rules from the widely used KDD benchmark dataset. We also compare the accuracy of the decision tree approach with that of other state-of-the-art algorithms.
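The abstract describes the approach only at a high level; as a concrete illustration, the sketch below trains a shallow decision tree on the KDD Cup '99 subset bundled with scikit-learn and dumps its learned splits as readable if/then rules. The encoding choices, tree depth, and train/test split are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch (not from the paper): an interpretable decision tree on
# KDD Cup '99 with scikit-learn, with the learned rules printed as text.
import pandas as pd
from sklearn.datasets import fetch_kddcup99
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder, LabelEncoder
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Load the 10% KDD Cup '99 subset shipped with scikit-learn.
data = fetch_kddcup99(percent10=True, as_frame=True)
X, y = data.data.copy(), LabelEncoder().fit_transform(data.target)

# Encode the three symbolic columns; the rest are already numeric.
cat_cols = ["protocol_type", "service", "flag"]
X[cat_cols] = OrdinalEncoder().fit_transform(X[cat_cols])
X = X.apply(pd.to_numeric)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# A shallow tree keeps the rule set small enough for a human to audit.
tree = DecisionTreeClassifier(max_depth=5, random_state=42).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, tree.predict(X_test)))

# Dump the learned decision rules as nested, human-readable if/then text.
print(export_text(tree, feature_names=list(X.columns)))
```

Capping `max_depth` trades a little accuracy for a rule set short enough for a human analyst to read end to end, which is the interpretability argument made above.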
To design and develop AI-based cybersecurity systems (e.g., an intrusion detection system (IDS)) that users can justifiably trust, one needs to evaluate the impact of trust in machine learning and deep learning technologies. To guide the design and implementation of trusted AI-based systems for IDS, this paper compares machine learning and deep learning models to investigate the trust impact based on the accuracy of the trusted AI-based systems with respect to malicious data in IDS. The four machine learning techniques are decision tree (DT), k-nearest neighbours (KNN), random forest (RF), and naïve Bayes (NB). The four deep learning techniques are LSTM (one and two layers) and GRU (one and two layers). Two datasets are used to classify the IDS attack type: the wireless sensor network detection system (WSN-DS) dataset and the KDD Cup network intrusion dataset. A detailed comparison of the eight techniques' performance, using all features and selected features, is made by measuring accuracy, precision, recall, and F1-score. Considering the findings related to the data, methodology, and expert accountability, interpretability of AI-based solutions is also needed to enhance trust in the IDS.
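As a rough sketch of the classical half of such a comparison, the snippet below fits the four machine learning techniques named above and reports the four metrics. Here `X` and `y` stand for an already preprocessed dataset such as WSN-DS or KDD Cup, and all hyperparameters are assumptions rather than the paper's configuration; the LSTM/GRU models are omitted to keep the sketch short.

```python
# Minimal sketch (not the paper's exact pipeline): comparing DT, KNN, RF,
# and NB on a labeled IDS feature matrix with the four reported metrics.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compare_models(X, y):
    """Fit each classifier on a 70/30 split and print its test metrics."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=42)
    models = {
        "DT": DecisionTreeClassifier(random_state=42),
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "RF": RandomForestClassifier(n_estimators=100, random_state=42),
        "NB": GaussianNB(),
    }
    for name, model in models.items():
        y_pred = model.fit(X_tr, y_tr).predict(X_te)
        # Weighted averaging accounts for the class imbalance typical
        # of intrusion datasets.
        prec, rec, f1, _ = precision_recall_fscore_support(
            y_te, y_pred, average="weighted", zero_division=0)
        print(f"{name}: acc={accuracy_score(y_te, y_pred):.3f} "
              f"prec={prec:.3f} rec={rec:.3f} f1={f1:.3f}")
```

Running `compare_models` once on all features and once on a selected-feature subset reproduces the shape of the comparison described in the abstract.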
Identifying network attacks is a crucial task for Internet of Things (IoT) security. The increasing number of IoT devices is creating massive amounts of data and opening new security vulnerabilities that malicious users can exploit to gain access. Recently, the IoT security research community has been using data-driven approaches to detect anomalies, intrusions, and cyber-attacks. However, obtaining accurate IoT attack data is time-consuming and expensive. On the other hand, evaluating complex security systems requires costly and sophisticated modeling practices involving expert security professionals. Thus, we have used simulated datasets to create different possible scenarios of IoT data labeled with malicious and non-malicious nodes. For each scenario, we tested off-the-shelf machine learning algorithms for malicious node detection. Experiments on these scenarios demonstrate the benefits of simulated datasets for assessing the performance of the ML algorithms.
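A minimal sketch of this workflow, assuming a toy simulator: the per-node features (packet rate, drop ratio, energy use) and the injected malicious behavior below are invented for illustration and do not reproduce the paper's scenarios.

```python
# Illustrative sketch (not the paper's simulator): generate one synthetic
# IoT scenario with labeled malicious/non-malicious nodes, then evaluate
# an off-the-shelf classifier on it.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(seed=0)

def simulate_scenario(n_nodes=1000, malicious_ratio=0.1):
    """Draw per-node traffic statistics; malicious nodes send traffic
    bursts, drop more packets, and burn more energy."""
    y = (rng.random(n_nodes) < malicious_ratio).astype(int)
    packet_rate = rng.normal(50, 10, n_nodes) + y * rng.normal(40, 15, n_nodes)
    drop_ratio = np.clip(rng.normal(0.05, 0.02, n_nodes) + y * 0.3, 0, 1)
    energy_used = rng.normal(1.0, 0.2, n_nodes) + y * rng.normal(0.5, 0.2, n_nodes)
    X = np.column_stack([packet_rate, drop_ratio, energy_used])
    return X, y

X, y = simulate_scenario()
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["benign", "malicious"]))
```

Varying the simulator's parameters (number of nodes, malicious ratio, behavior offsets) yields the different scenarios on which the detectors can be compared.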