2023
DOI: 10.1109/tdsc.2022.3161477

Enhancing Backdoor Attacks With Multi-Level MMD Regularization

Cited by 12 publications (3 citation statements)
References 37 publications
“…Zhao et al. [37] proposed measuring the poisoned data with a cross-entropy objective to produce adaptive, imperceptible perturbations and to constrain the latent representation of the poisoned data during training, improving both the stealthiness of the attack and its resistance to defenses. Xia et al. [38] verified that the distributions of multi-level representations of poisoned and clean samples in normally trained backdoored models differ significantly, using maximum mean discrepancy (MMD), energy distance (ED), and sliced Wasserstein distance (SWD) as metrics. They then proposed ML-MMDR, a difference-reduction method that adds multi-level MMD regularization to the loss to optimize the poisoned data.…”
Section: B. Invisible Backdoor Attack (mentioning)
confidence: 99%
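The statement above summarizes the core idea of ML-MMDR: an MMD penalty between the feature distributions of poisoned and clean samples, computed at several network levels, is added to the training loss so that poisoned representations stay close to clean ones. A minimal PyTorch-style sketch of that idea follows; the Gaussian kernel, the bandwidth sigma, the weight lam, and the assumption that the model returns a list of per-level features are illustrative choices, not the paper's exact formulation.

```python
# Minimal sketch of multi-level MMD regularization (illustrative only, not the
# paper's exact ML-MMDR formulation; kernel, bandwidth, and weight are assumed).
import torch
import torch.nn.functional as F

def gaussian_mmd(x, y, sigma=1.0):
    """Biased MMD^2 estimate with a Gaussian kernel between two feature batches."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)           # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

def ml_mmd_loss(model, clean_x, clean_y, poisoned_x, target_y, lam=1.0):
    """Cross-entropy on clean and poisoned batches plus an MMD penalty per feature level."""
    # Assumes the model returns (logits, [level_1_feats, level_2_feats, ...]).
    logits_c, feats_c = model(clean_x)
    logits_p, feats_p = model(poisoned_x)
    ce = F.cross_entropy(logits_c, clean_y) + F.cross_entropy(logits_p, target_y)
    mmd = sum(gaussian_mmd(fc.flatten(1), fp.flatten(1))
              for fc, fp in zip(feats_c, feats_p))
    return ce + lam * mmd
```

As the citation statement notes, the point of the extra term is to shrink the multi-level representation gap between poisoned and clean samples that would otherwise expose the backdoor.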
“…Recently, adversarial attacks have been widely applied in various fields, e.g., image classification [60,61], traffic analysis [43,46], autonomous driving [28,29], and object detection [39]. As for Android malware detection, there have been many studies [21,24,27,33,34] on syntax-feature-oriented adversarial example (AE) generation.…”
Section: Related Work (mentioning)
confidence: 99%
“…Although backdoor attacks have been extensively researched in multiple applications, such as computer vision (CV) [11][12][13][14][15][16][17] and natural language processing (NLP) [18][19][20][21][22][23][24][25][26][27][28][29], there is no such research in the field of DGA detection. Due to the particularities of DGA, existing backdoor attacks cannot be directly applied to deep-learning-based DGA detection approaches.…”
Section: Introduction (mentioning)
confidence: 99%