2022 IEEE Security and Privacy Workshops (SPW)
DOI: 10.1109/spw54247.2022.9833884

Parameterizing Activation Functions for Adversarial Robustness

Abstract: The bulk of existing research in defending against adversarial examples focuses on defending against a single (typically bounded p-norm) attack, but for a practical setting, machine learning (ML) models should be robust to a wide variety of attacks. In this paper, we present the first unified framework for considering multiple attacks against ML models. Our framework is able to model different levels of learner's knowledge about the test-time adversary, allowing us to model robustness against unforeseen attac…

Cited by 16 publications (4 citation statements)
References 15 publications

“…SiLU further improves the accuracy (Table 1 row 16), which echoes the result in [57]. Recently, Dai et al [9] added learnable parameters to original non-parametric functions, and proposed the parametric counterparts, e.g., ReLU to Parametric ReLU (PReLU) and SiLU to Parametric SiLU (PSiLU) or Parametric Shifted SiLU (PSSiLU). These parametric functions outperform the non-parametric ones on CIFAR-10 [28].…”
Section: Block-level Design
confidence: 66%
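The parametric counterparts described in this statement insert learnable scalars into the base activation. Below is a minimal PyTorch sketch of that idea; the exact parameterization used in [9] (scalar vs. per-channel parameters, initialization, constraints) is an assumption here. PSiLU is sketched as x·σ(αx) and PSSiLU as x·(σ(αx) − β)/(1 − β), a shifted form consistent with the later statement that negative inputs can yield positive outputs.

```python
import torch
import torch.nn as nn

class PSiLU(nn.Module):
    """Parametric SiLU: f(x) = x * sigmoid(alpha * x), with alpha learned
    jointly with the network weights. Sketch of the idea attributed to [9];
    the scalar alpha and its initialization are assumptions."""
    def __init__(self, init_alpha: float = 1.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(float(init_alpha)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(self.alpha * x)

class PSSiLU(nn.Module):
    """Parametric Shifted SiLU: f(x) = x * (sigmoid(alpha * x) - beta) / (1 - beta).
    The shift beta lets strongly negative inputs produce positive outputs;
    the exact functional form used in [9] is an assumption."""
    def __init__(self, init_alpha: float = 1.0, init_beta: float = 0.1):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(float(init_alpha)))
        self.beta = nn.Parameter(torch.tensor(float(init_beta)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        beta = self.beta.clamp(0.0, 0.99)  # keep the denominator positive
        return x * (torch.sigmoid(self.alpha * x) - beta) / (1.0 - beta)
```

These modules are drop-in replacements for nn.ReLU() or nn.SiLU() and add only one or two learnable scalars per activation.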
“…A huge number of AT variants have been proposed, e.g., TRADES [64], AWP [56], ADT [15], DART [52], MART [53], CAS [4], Max-Margin AT [14], etc. For the robust DNN research, only a few studies explored how architectures affect robustness [13,36,46,48], e.g., depths [60], widths [55] and activation functions [9,57]. However, the total model capacity is unconstrained along with the architecture modifications.…”
Section: Related Work
confidence: 99%
“…Self-COnsistent Robust Error (SCORE) [121] employs local equivariance to minimize the robustness error of a network, easing the reconciliation between robustness and accuracy and dealing with worst-case uncertainty. Parametric Shifted Sigmoidal Linear Unit (PSSiLU) [122] changes the activation function to create high finite curvature and output positive results with negative inputs, creating a parameterized activation function, that is used with adversarial training.…”
Section: Change Loss
confidence: 99%
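The last statement notes that the parameterized activation is used together with adversarial training. The sketch below shows where such an activation sits in a generic L∞ PGD adversarial-training loop; it is not the exact procedure of [9] or [122], and the toy model, hyperparameters, and the `loader` variable are assumptions. Because alpha and beta are registered as nn.Parameter objects, the same optimizer that updates the weights also updates the activation parameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, step=2 / 255, iters=10):
    """Generic L-inf PGD attack; illustrative, not the papers' exact attack."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

# Toy CIFAR-10-sized model reusing the PSSiLU module from the earlier sketch.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), PSSiLU(), nn.Linear(512, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)  # also updates alpha/beta

for x, y in loader:  # `loader` is an assumed CIFAR-10 DataLoader
    x_adv = pgd_attack(model, x, y)
    opt.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    opt.step()
```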