2021
DOI: 10.1049/ise2.12050
An optimal defensive deception framework for the container‐based cloud with deep reinforcement learning

Abstract: Defensive deception is emerging as a way to reveal stealthy attackers by presenting intentionally falsified information. To implement it in the increasingly dynamic and complex cloud, major concerns remain about establishing a precise adversarial model and an adaptive decoy placement strategy. However, existing studies do not address both issues because of (1) insufficient extraction of potential threats in virtualisation techniques, (2) inadequate learning of the agility of the target environment, and (3) the…

Cited by 12 publications (5 citation statements) | References 43 publications
“…The authors of [19] and [20] design deep reinforcement learning agents that place fake microservice replicas in order to maximize their capability to lure attacks directed toward legitimate microservices and to conceal assets, respectively. Similarly, [21] proposes a two-phase honeypot allocation algorithm combining game theory and reinforcement learning techniques in order to model and dynamically adapt the honeypot allocation according to attacker activity.…”
Section: Related Work
confidence: 99%
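The decoy-placement setting quoted above can be sketched as a toy reinforcement-learning loop. Everything below (the environment, the single static attacker, the bandit-style update) is a hypothetical simplification for illustration, not the actual method of [19] or [20]:

```python
import random

random.seed(0)

# Hypothetical toy environment: N services, one of which the (static, toy)
# attacker always probes. Each step the defender places a single decoy;
# reward is 1 if the decoy intercepts the probe, else 0.
N_SERVICES = 4
TARGET = 2  # service the toy attacker always probes (unknown to the learner)

q = [0.0] * N_SERVICES  # one Q-value per placement action (stateless bandit)
EPS, ALPHA = 0.1, 0.5   # exploration rate, learning rate

def step(action):
    """Return reward: 1.0 if the decoy placement lures the attack."""
    return 1.0 if action == TARGET else 0.0

for episode in range(500):
    # epsilon-greedy action selection
    if random.random() < EPS:
        a = random.randrange(N_SERVICES)
    else:
        a = max(range(N_SERVICES), key=lambda i: q[i])
    r = step(a)
    q[a] += ALPHA * (r - q[a])  # incremental Q update toward observed reward

best = max(range(N_SERVICES), key=lambda i: q[i])
print("learned placement:", best)
```

In the cited works the attacker is adaptive and the placement space is far larger, which is why they use deep function approximation rather than a table; the loop structure, however, is the same.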
“…The authors in [47] utilise online learning to update defence models with newly collected attack information, although this is of a 'non-continual' variety, meaning continual learning techniques have not been implemented to address concerns regarding catastrophic interference, thereby failing to meet requirement A.3.1. Leveraging the approximations of DRL, Li et al. [71] propose an optimal defensive deception framework by creating System Risk Graphs (SRGs) which model adversary actions. The attack models are then used to train a DRL agent to generate optimal deployment strategies within microservice architectures.…”
Section: Automated Blue Team Solutions
confidence: 99%
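As a loose illustration of the System Risk Graph idea: adversary actions can be modelled as weighted edges between containers, and node risk scores derived from the graph can then drive decoy deployment. The graph, probabilities, and scoring rule below are entirely hypothetical, not the paper's construction:

```python
# Toy "system risk graph": nodes are containers, edges are exploit paths
# annotated with traversal probabilities. We score each node by the best-path
# probability that an attacker starting at the entry point reaches it, then
# propose a decoy at the highest-risk non-entry node. Purely illustrative.
edges = {
    "entry": [("web", 0.9)],
    "web":   [("api", 0.7), ("cache", 0.3)],
    "api":   [("db", 0.8)],
    "cache": [],
    "db":    [],
}

def reach_prob(start):
    """Best-path reach probability for each node from `start` (DAG assumed)."""
    prob = {n: 0.0 for n in edges}
    prob[start] = 1.0
    # process nodes in an order where parents precede children (fixed here)
    for node in ["entry", "web", "api", "cache", "db"]:
        for child, p in edges[node]:
            prob[child] = max(prob[child], prob[node] * p)
    return prob

risk = reach_prob("entry")
decoy_site = max((n for n in risk if n != "entry"), key=lambda n: risk[n])
print(decoy_site, round(risk[decoy_site], 3))
```

In the cited framework these scores would feed a DRL agent's state rather than a one-shot argmax, letting the strategy adapt as the graph changes.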
“…Li et al. [56] proposed a defensive deception framework for container-based clouds. Their approach generates an adversarial model, a decoy placement strategy, and decoy routing tables using a DRL algorithm.…”
Section: Kubanomaly Agent's Privilege Permission and Open
confidence: 99%