Recent advances in artificial intelligence (AI) have driven tremendous growth in innovation and automation. Although AI technologies offer significant benefits, they can also be used maliciously. Highly targeted and evasive attacks embedded in benign carrier applications, such as DeepLocker, have demonstrated the intentional use of AI for harmful purposes. Threat actors continually refine their attack strategies, with particular emphasis on applying AI-driven techniques to the attack process, called AI-based cyber attacks, which can be combined with conventional attack techniques to cause greater damage. Despite several studies on AI and security, researchers have not yet summarized AI-based cyber attacks thoroughly enough to understand an adversary's actions and to develop proper defenses against such attacks. This study explores existing research on AI-based cyber attacks and maps it onto a proposed framework, providing insight into new threats. The framework classifies several aspects of the malicious use of AI across the cyber attack life cycle and provides a basis for detecting such attacks and predicting future threats. We also show how to apply the framework to analyze AI-based cyber attacks in a hypothetical scenario involving a critical smart grid infrastructure.
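To make the idea of mapping AI-driven techniques onto the attack life cycle concrete, the following is a minimal sketch of how such a framework could be represented as a data structure. The phase names and technique entries are illustrative assumptions, not the paper's actual taxonomy.

```python
# Hypothetical sketch: a framework mapping AI-driven techniques onto
# phases of the cyber attack life cycle. Phase and technique names are
# illustrative placeholders, not the proposed framework's real taxonomy.
from collections import defaultdict

KILL_CHAIN_PHASES = [
    "reconnaissance", "weaponization", "delivery",
    "exploitation", "command_and_control", "actions_on_objectives",
]

def build_framework(entries):
    """Group (phase, technique) pairs into a phase -> techniques map."""
    framework = defaultdict(list)
    for phase, technique in entries:
        if phase not in KILL_CHAIN_PHASES:
            raise ValueError(f"unknown phase: {phase}")
        framework[phase].append(technique)
    return dict(framework)

framework = build_framework([
    ("reconnaissance", "ML-driven target profiling"),
    ("delivery", "AI-concealed payload (DeepLocker-style)"),
    ("exploitation", "adversarial-example evasion of ML defenses"),
])
```

A classification like this lets observed attacker behavior be looked up by life-cycle phase, which is one plausible basis for the detection and prediction step the abstract describes.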
<p>Cyber-physical systems are becoming more intelligent through the adoption of heterogeneous sensor networks and machine learning capabilities that handle an increasing amount of input data. While this complexity helps solve problems in various domains, it adds new challenges for system assurance. One is the growing number of abnormal behaviors that degrade system performance due to possible sensor faults and attacks. The combination of safety risks, usually caused by random sensor faults, and security risks, which can arise in any state of the system, makes full-coverage testing of a cyber-physical system challenging. Existing techniques are inadequate for handling such combined safety and security risks in complex cyber-physical systems. In this paper, we propose AST-SafeSec, an analysis methodology covering both safety and security aspects, which uses reinforcement learning to identify the most-likely adversarial paths through the normal or failure states of a cyber-physical system that can influence system behavior via its sensor data. The methodology is evaluated in an autonomous vehicle scenario by incorporating security attacks into the vehicle's stochastic sensor elements. Evaluation results show that the methodology analyzes the interaction of malicious actions with random faults and identifies both the incident caused by those interactions and the most-likely path leading to it.</p>
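The core idea of using reinforcement learning to find a most-likely adversarial path can be sketched with tabular Q-learning on a toy model. This is not the AST-SafeSec implementation: the states, actions, transitions, and rewards below are invented assumptions, with state 4 standing in for an "incident" state reached by repeated sensor perturbations.

```python
# Minimal sketch (not AST-SafeSec): tabular Q-learning that searches for
# an adversarial action sequence driving a toy sensor-driven system into
# an incident state. All states, actions, and rewards are illustrative.
import random

N_STATES = 5          # state 4 models the "incident" state
ACTIONS = [0, 1]      # 0 = benign noise, 1 = targeted sensor perturbation

def step(state, action):
    """Toy transition: targeted perturbations push the system toward the incident."""
    if action == 1:
        next_state = min(state + 1, N_STATES - 1)
    else:
        next_state = max(state - 1, 0)
    reward = 1.0 if next_state == N_STATES - 1 else -0.01
    done = next_state == N_STATES - 1
    return next_state, reward, done

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Standard epsilon-greedy Q-learning over the toy model."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        for _ in range(100):  # cap episode length
            if done:
                break
            a = rng.choice(ACTIONS) if rng.random() < eps \
                else max(ACTIONS, key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

def greedy_path(q):
    """Roll out the learned policy: the highest-value path to the incident."""
    s, path = 0, [0]
    for _ in range(20):
        a = max(ACTIONS, key=lambda x: q[s][x])
        s, _, done = step(s, a)
        path.append(s)
        if done:
            break
    return path
```

In this toy setting the learned greedy policy walks the chain of states straight to the incident; the methodology's actual contribution is doing this search over the stochastic fault and attack states of a real cyber-physical model rather than a hand-built chain.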