2022
DOI: 10.3389/frai.2021.780843
Adversarially Robust Learning via Entropic Regularization

Abstract: In this paper we propose a new family of algorithms, ATENT, for training adversarially robust deep neural networks. We formulate a new loss function that is equipped with an additional entropic regularization. Our loss function considers the contribution of adversarial samples that are drawn from a specially designed distribution in the data space that assigns high probability to points with high loss and in the immediate neighborhood of training samples. Our proposed algorithms optimize this loss to seek adve…
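The sampling idea in the abstract — favoring perturbations that have high loss yet stay near a training sample — can be illustrated with a minimal sketch. This is our own illustration, not the paper's implementation: a plain-NumPy logistic loss stands in for the network loss, and `langevin_perturb` (a name we introduce) takes noisy gradient-ascent steps on the input, projected back into an eps-neighborhood.

```python
import numpy as np

def loss(w, x, y):
    # Logistic loss for a linear model; stands in for a network loss.
    z = y * np.dot(w, x)
    return np.log1p(np.exp(-z))

def grad_loss_x(w, x, y):
    # Gradient of the logistic loss with respect to the input x.
    z = y * np.dot(w, x)
    return -y * w / (1.0 + np.exp(z))

def langevin_perturb(w, x, y, eps=0.5, step=0.1, temp=0.01, n_steps=20, rng=None):
    """Sample a perturbation near x that prefers high-loss points:
    ascend the loss with Langevin-style noise, projecting back into
    an eps-ball around x (an assumed neighborhood constraint)."""
    rng = np.random.default_rng(rng)
    x_adv = x.copy()
    for _ in range(n_steps):
        g = grad_loss_x(w, x_adv, y)
        noise = np.sqrt(2 * step * temp) * rng.standard_normal(x.shape)
        x_adv = x_adv + step * g + noise
        # Projection keeps the sample in the immediate neighborhood of x.
        x_adv = x + np.clip(x_adv - x, -eps, eps)
    return x_adv

w = np.array([1.0, -2.0])
x = np.array([0.5, 0.5])
y = 1.0
x_adv = langevin_perturb(w, x, y, rng=0)
# The sampled point typically has higher loss than the clean point.
```

Because the drift term ascends the loss while the projection bounds the perturbation, the sampled points concentrate exactly where the abstract's distribution places high probability: high-loss points close to the training sample.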

Cited by 8 publications (4 citation statements)
References 23 publications (52 reference statements)
“…However, they do not show EnSGD to be effective for nonsmooth loss functions. In addition, though several studies [35], [36] apply Langevin dynamics to generate attacks, there are few studies that apply it to optimize parameters in AT. Thus, the effectiveness for AT is still unclear due to nonsmoothness.…”
Section: EntropySGD
confidence: 99%
“…This aligns with [28], where the authors demonstrated that the local Lipschitz constant can be used to explicitly quantify the robustness of machine learning models. Many empirical defensive approaches, such as hessian/curvature-based regularization [48], gradient magnitude penalty [73], smoothening with random noise [44], or entropy regularization [33], have echoed the flatness performance of a robust model. However, all the above approaches require significant computational or memory resources, and many of them, such as hessian/curvature-based solutions, may suffer from standard accuracy decreases [26].…”
Section: Motivation
confidence: 99%
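The local Lipschitz constant invoked in [28] as an explicit robustness measure can be probed empirically. The sketch below is our own illustration (all names are ours): it lower-bounds the local Lipschitz constant of a scalar function around an input by taking the largest finite-difference ratio over random probes on a small sphere — a common empirical proxy, not a certified bound.

```python
import numpy as np

def local_lipschitz(f, x, radius=0.1, n_probes=200, rng=None):
    """Empirical lower bound on the local Lipschitz constant of f
    around x: the maximum finite-difference ratio |f(x+d)-f(x)|/||d||
    over random directions d of length `radius`."""
    rng = np.random.default_rng(rng)
    fx = f(x)
    best = 0.0
    for _ in range(n_probes):
        d = rng.standard_normal(x.shape)
        d = radius * d / np.linalg.norm(d)  # probe on the radius-sphere
        best = max(best, abs(f(x + d) - fx) / np.linalg.norm(d))
    return best

# Sanity check on a linear score f(x) = w.x, whose true Lipschitz
# constant is ||w||; the estimate approaches it from below.
w = np.array([3.0, 4.0])
f = lambda x: float(np.dot(w, x))
est = local_lipschitz(f, np.zeros(2), rng=0)
```

A flat loss surface in the sense discussed above corresponds to small ratios of this kind around the training data, which is what the cited regularizers (curvature penalties, gradient penalties, entropy regularization) implicitly push toward.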
“…We adopt adversarial re-training [11], [12] to train a model that is exposed to a wide variation of localities, i.e., subgraph structures around key-gates. The motivation for adversarial retraining is as follows: we want to create adversarial subgraph embeddings where the attack model M *…”
Section: B. Adversarially Trained Attacker's Model M*
confidence: 99%
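The adversarial re-training recipe of [11], [12] alternates attack generation with parameter updates. A minimal FGSM-style sketch on a logistic model is below — our own illustration under assumed names (the cited works operate on subgraph embeddings, not raw vectors): each epoch, every sample is pushed one eps-step along the sign of the input-loss gradient, and the weights are then updated on the perturbed batch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_retrain(X, y, eps=0.1, lr=0.5, epochs=100, rng=None):
    """Train a logistic model on FGSM-perturbed inputs: perturb each
    sample one eps-step in the sign of its input-loss gradient, then
    take a gradient step on the weights using the perturbed batch."""
    rng = np.random.default_rng(rng)
    w = rng.standard_normal(X.shape[1]) * 0.01
    for _ in range(epochs):
        margin = y * (X @ w)
        s = sigmoid(-margin)               # |d loss / d margin|
        grad_x = -(y * s)[:, None] * w     # input gradient of the loss
        X_adv = X + eps * np.sign(grad_x)  # FGSM perturbation
        margin_adv = y * (X_adv @ w)
        grad_w = -((y * sigmoid(-margin_adv))[:, None] * X_adv).mean(axis=0)
        w -= lr * grad_w
    return w

# Toy linearly separable data: label = sign of the first coordinate.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = np.sign(X[:, 0])
w = adversarial_retrain(X, y, rng=1)
acc = np.mean(np.sign(X @ w) == y)
```

Exposing the model to perturbed variants of every training point is the vector-space analogue of exposing the attack model to a wide variation of subgraph structures around key-gates.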