2022 American Control Conference (ACC)
DOI: 10.23919/acc53348.2022.9867244

Practical Convex Formulations of One-hidden-layer Neural Network Adversarial Training

Cited by 2 publications (2 citation statements)
References 6 publications
“…Apicella et al (2021) provided a comprehensive survey on modern trainable activation functions, highlighting their importance in enhancing learning capabilities. The Gaussian error function-based activation function introduced by Chen and Pock (2016) exemplifies the trend toward smoother alternatives to ReLU, despite the computational trade-offs as reported by Bai et al (2023). Sonoda and Murata (2017) advanced this line of inquiry by adapting the Fourier series and Gaussian cumulative distribution function to devise activation functions for particular architectures.…”
Section: Related Work
confidence: 93%
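The contrast between ReLU and the smoother erf-based activations discussed in the statement above can be sketched in a few lines. The `erf_gate` form below, x·Φ(x) written via the Gaussian error function (the GELU-style gate), is used here only as an illustrative erf-based activation and is not necessarily the exact function of Chen and Pock (2016):

```python
import math

def relu(x):
    # Piecewise-linear ReLU: cheap, but non-smooth at x = 0.
    return max(0.0, x)

def erf_gate(x):
    # Illustrative smooth activation built from the Gaussian error
    # function: x * Phi(x), where Phi is the standard normal CDF.
    # Smooth everywhere, approaches ReLU for large |x|, but costs an
    # erf evaluation per unit -- the kind of computational trade-off
    # reported by Bai et al (2023).
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
```

Unlike ReLU, `erf_gate` has nonzero gradient for negative inputs, which is part of what makes such activations attractive for training.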
“…The vulnerability of neural networks to adversarial attacks has been observed in various applications, such as computer vision [25,44] and control systems [31]. In response, "adversarial training" [12,13,25,36,62] has been studied to alleviate the susceptibility. Adversarial training builds robust neural networks by training on adversarial examples.…”
Section: Introduction
confidence: 99%
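"Training on adversarial examples" can be made concrete with a minimal sketch: generate a worst-case perturbation of the input (here with the standard fast gradient sign method, FGSM) and take the gradient step on the perturbed input instead of the clean one. The one-hidden-layer ReLU network, squared loss, and function names below are assumptions for illustration; this is the generic non-convex training loop, not the convex reformulation proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, w2):
    # One-hidden-layer ReLU network: f(x) = w2 . relu(W1 x).
    h = np.maximum(W1 @ x, 0.0)
    return h, w2 @ h

def input_grad(x, y, W1, w2):
    # Gradient of the squared loss 0.5*(f(x) - y)^2 with respect to x.
    _, out = forward(x, W1, w2)
    mask = (W1 @ x > 0.0).astype(float)  # ReLU active-set indicator
    return (out - y) * (W1.T @ (w2 * mask))

def fgsm(x, y, W1, w2, eps):
    # FGSM attack: one-step worst-case perturbation within an
    # l_inf ball of radius eps around x.
    return x + eps * np.sign(input_grad(x, y, W1, w2))

def adv_train_step(x, y, W1, w2, eps, lr):
    # Adversarial training: descend on the loss at the perturbed
    # input x_adv rather than at the clean input x.
    x_adv = fgsm(x, y, W1, w2, eps)
    h, out = forward(x_adv, W1, w2)
    err = out - y
    mask = (W1 @ x_adv > 0.0).astype(float)
    W1 -= lr * err * np.outer(w2 * mask, x_adv)
    w2 -= lr * err * h
    return W1, w2
```

Repeating `adv_train_step` drives the loss down on the perturbed inputs, which is what makes the resulting network robust within the eps-ball.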