2020
DOI: 10.1007/978-3-030-64793-3_4
Adversarial Deep Reinforcement Learning Based Adaptive Moving Target Defense

Cited by 17 publications (9 citation statements)
References 17 publications
“…Overall, only one ACO Gym (CybORG [112] Cage Challenge 3 [53]) has recognised the need for automated multi-agent algorithms (A.4.1) as automated blue team solutions. The environments of publications [78] and [37] (both focusing specifically on using RL to defend against DDoS attacks) could be a potential inspiration for structuring ACO Gyms to facilitate multi-agent automated red and blue teaming collaboration (requirement G.4.1). Very few ACO Gyms facilitate adversarial training (G.6.1 and A.6.1), which could potentially be utilised to strengthen the automated blue agent against a variety of cyber attacks (A.6.2).…”
Section: Combined Analysis of All ACO Gyms
confidence: 99%
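The adversarial-training requirement (A.6.2) highlighted in this excerpt can be illustrated with a minimal self-play sketch. Everything below is an assumption for illustration only: the toy environment (state = number of compromised hosts), the action sets, and the reward shaping are invented here and do not correspond to any cited ACO Gym's API.

```python
import random
from collections import defaultdict

# Hypothetical toy environment: state = number of compromised hosts (0..N).
# Red tries to raise it, blue tries to lower it. Not any actual ACO Gym API.
N_HOSTS = 5
RED_ACTIONS = ["exploit", "scan"]     # exploit may compromise a host
BLUE_ACTIONS = ["patch", "monitor"]   # patch may clean a host

def step(state, red_a, blue_a):
    """Simultaneous-move transition; probabilities are illustrative only."""
    if red_a == "exploit" and random.random() < (0.7 if blue_a == "monitor" else 0.4):
        state = min(N_HOSTS, state + 1)
    if blue_a == "patch" and random.random() < 0.5:
        state = max(0, state - 1)
    blue_r = -state  # zero-sum: blue rewarded for a clean network
    return state, blue_r, -blue_r

def train(episodes=2000, eps=0.1, alpha=0.1, gamma=0.95):
    q_red = defaultdict(float)   # (state, action) -> value
    q_blue = defaultdict(float)
    for _ in range(episodes):
        s = 0
        for _ in range(30):  # fixed-length episode
            red_a = (random.choice(RED_ACTIONS) if random.random() < eps
                     else max(RED_ACTIONS, key=lambda a: q_red[(s, a)]))
            blue_a = (random.choice(BLUE_ACTIONS) if random.random() < eps
                      else max(BLUE_ACTIONS, key=lambda a: q_blue[(s, a)]))
            s2, blue_r, red_r = step(s, red_a, blue_a)
            # Independent Q-learning updates for both adversarial agents.
            q_blue[(s, blue_a)] += alpha * (
                blue_r + gamma * max(q_blue[(s2, a)] for a in BLUE_ACTIONS)
                - q_blue[(s, blue_a)])
            q_red[(s, red_a)] += alpha * (
                red_r + gamma * max(q_red[(s2, a)] for a in RED_ACTIONS)
                - q_red[(s, red_a)])
            s = s2
    return q_red, q_blue

if __name__ == "__main__":
    q_red, q_blue = train()
    # Blue's learned policy per state, hardened by training against a learning red.
    print({s: max(BLUE_ACTIONS, key=lambda a: q_blue[(s, a)])
           for s in range(N_HOSTS + 1)})
```

Because the red agent keeps adapting during training, the blue policy is exercised against a shifting attack distribution rather than a fixed scripted adversary, which is the point of requirement A.6.2.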
“…appropriate cost-benefit tradeoff and make useful decisions regarding when to reimage [63]. Similar approaches have allowed reinforcement learning agents to learn when to isolate potentially infected nodes within a constrained network, or to develop game-theoretic strategies for adaptively responding to adversaries [64]. These results are promising, because they suggest that machine learning could be useful for automating tactics like moving target defense or for providing responses to some types of threats, such as botnets.…”
Section: Response and Recovery
confidence: 99%
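The when-to-reimage decision described in this excerpt is essentially a stopping problem: keep paying a per-step risk cost for leaving a possibly compromised node online, or pay a fixed reimaging cost and reset it. The sketch below is a hypothetical toy, not the method of reference [63]; the suspicion-level states, costs, and transition probabilities are all illustrative assumptions.

```python
import random
from collections import defaultdict

# States: suspicion level 0 (clean) .. 3 (almost certainly compromised).
# Actions: "wait" (keep node online) or "reimage" (pay a fixed cost, reset).
REIMAGE_COST = 5.0
COMPROMISE_COST = 2.0  # per-step cost scaling with suspicion while waiting

def step(s, a):
    if a == "reimage":
        return 0, -REIMAGE_COST            # reset node, pay the fixed cost
    s2 = min(3, s + (1 if random.random() < 0.3 else 0))  # suspicion may grow
    return s2, -COMPROMISE_COST * s        # risk cost grows with suspicion

def train(episodes=5000, alpha=0.1, gamma=0.95, eps=0.1):
    q = defaultdict(float)
    actions = ["wait", "reimage"]
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            a = (random.choice(actions) if random.random() < eps
                 else max(actions, key=lambda x: q[(s, x)]))
            s2, r = step(s, a)
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, x)] for x in actions)
                                  - q[(s, a)])
            s = s2
    return q

if __name__ == "__main__":
    q = train()
    for s in range(4):
        print(f"suspicion {s}:", max(["wait", "reimage"], key=lambda x: q[(s, x)]))
```

With these illustrative numbers the agent typically learns to wait at low suspicion and reimage once suspicion is high, which is exactly the cost-benefit tradeoff the excerpt describes.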
“…As a recent example, Cam [11] presents a method and system for providing cyber resilience that integrates autonomous adversary and defender agents with deep reinforcement learning to predict current and future adversary activities, and then enables the agents to take appropriate automated actions to prevent and mitigate those activities. Similarly, Taha et al. [12] developed a multi-agent reinforcement learning framework to solve a two-player general-sum game formulated between an adversary and the defender. Sengupta et al. [13] proposed a multi-agent RL algorithm that uses a Bayesian Strong Stackelberg Q-learning (BSS-Q) approach, improving MTD for web-application security.…”
Section: A. Cognitive Techniques for Cybersecurity in Network and Networked Services
confidence: 99%
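The published BSS-Q algorithm computes a Strong Stackelberg equilibrium over mixed defender strategies and Bayesian attacker types at each learning step. The sketch below conveys only the leader-follower structure under strong simplifying assumptions (a single attacker type, pure-strategy commitment, hand-filled illustrative Q-values); it is not Sengupta et al.'s algorithm.

```python
import numpy as np

# Simplified leader-follower value computation for one state of an MTD game.
# Q_def[d, a] / Q_att[d, a]: payoffs when the defender deploys configuration d
# and the attacker launches attack a. All values are illustrative assumptions.
Q_def = np.array([[ 4.0, -2.0,  1.0],
                  [ 0.0,  3.0, -1.0],
                  [ 2.0,  1.0,  2.5]])
Q_att = np.array([[-3.0,  2.0,  0.0],
                  [ 1.0, -2.0,  2.0],
                  [-1.0,  0.5, -2.0]])

def pure_stackelberg(q_def, q_att):
    """Defender (leader) commits to one configuration; the attacker (follower)
    best-responds; 'strong' means ties break in the leader's favour."""
    best_d, best_a, best_val = None, None, -np.inf
    for d in range(q_def.shape[0]):
        brs = np.flatnonzero(q_att[d] == q_att[d].max())  # follower best responses
        a = max(brs, key=lambda j: q_def[d, j])           # strong tie-breaking
        if q_def[d, a] > best_val:
            best_d, best_a, best_val = d, a, q_def[d, a]
    return best_d, best_a, best_val

if __name__ == "__main__":
    d, a, v = pure_stackelberg(Q_def, Q_att)
    print(f"defender commits to config {d}; attacker best-responds with "
          f"attack {a}; defender value {v}")
```

In the spirit of Minimax-Q, the equilibrium value of this stage game would replace the max operator in the Bellman backup during learning; the full BSS-Q additionally optimises over mixed defender strategies and averages over a belief on attacker types.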
“…An alternative model can include an additional agent, namely the attacker, enabling the MTD controller to further improve its reactive defense and attack mitigation. The environment and game-theory model introduce additional parameters for identifying the attacker and predicting their target [12]. Attackers' strategies can change over time, so the model needs to describe high-level attack patterns capable of identifying both old and new attacks by analysing behaviours and predicting intentions (e.g., reconnaissance, Denial of Service (DoS), Command and Control (C&C), MitM, etc.).…”
Section: A. OptSFC and Cognitive Techniques (AI/ML-Driven Control)
confidence: 99%
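One lightweight way to describe the high-level attack patterns this excerpt calls for is a first-order Markov model over attack phases. The sketch below is a hypothetical example: the phase labels and training traces are made up here, and a real deployment would derive them from IDS alerts mapped onto kill-chain stages.

```python
from collections import Counter, defaultdict

# Hypothetical high-level attack phases and made-up observed traces.
PHASES = ["recon", "dos", "c2", "mitm"]
traces = [
    ["recon", "recon", "c2", "mitm"],
    ["recon", "dos", "dos"],
    ["recon", "c2", "c2", "mitm"],
    ["recon", "dos"],
]

def fit_transitions(traces):
    """Estimate P(next phase | current phase) from transition counts."""
    counts = defaultdict(Counter)
    for t in traces:
        for cur, nxt in zip(t, t[1:]):
            counts[cur][nxt] += 1
    return {cur: {nxt: n / sum(c.values()) for nxt, n in c.items()}
            for cur, c in counts.items()}

def predict_next(model, current):
    """Most likely next phase, or None if the phase was never observed."""
    dist = model.get(current)
    return max(dist, key=dist.get) if dist else None

if __name__ == "__main__":
    model = fit_transitions(traces)
    print(predict_next(model, "recon"))  # -> 'c2' with this toy data
```

The predicted next phase can then feed the MTD controller as the "intention" signal the excerpt mentions, e.g. to pre-emptively reconfigure the assets a C&C or MitM stage would target.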