2021
DOI: 10.48550/arxiv.2110.15764
Preprint

ε-weakened Robustness of Deep Neural Networks

Abstract: This paper introduces the notion of 𝜀-weakened robustness for analyzing the reliability and stability of deep neural networks (DNNs). Unlike conventional robustness, which focuses on the "perfect" safe region in the absence of adversarial examples, 𝜀-weakened robustness focuses on the region where the proportion of adversarial examples is bounded by a user-specified 𝜀. A smaller 𝜀 means a smaller chance of failure. Under this robustness definition, we can give conclusive results for the regions where conve…
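
To make the definition above concrete, the following is a minimal sketch (not the paper's algorithm) of how the adversarial proportion inside an L∞ ball could be estimated by uniform sampling and compared against a user-specified ε. The `model` callable, the ball radius, and the sample count are illustrative assumptions.

```python
# Hedged sketch: Monte Carlo estimate of the adversarial-example proportion
# inside an L-infinity ball, in the spirit of the epsilon-weakened robustness
# definition quoted above. Not the paper's method; the model interface,
# radius, and sample count are assumptions for illustration.
import numpy as np

def estimated_adversarial_proportion(model, x, label, radius=0.03,
                                     n_samples=10_000, rng=None):
    """Estimate the fraction of points in the L-inf ball around x
    that the model misclassifies (the adversarial proportion)."""
    rng = np.random.default_rng() if rng is None else rng
    # Sample uniformly from the L-infinity ball of the given radius around x.
    noise = rng.uniform(-radius, radius, size=(n_samples,) + x.shape)
    perturbed = np.clip(x[None, ...] + noise, 0.0, 1.0)
    # Assumption: model maps a batch of inputs to per-class scores.
    preds = model(perturbed).argmax(axis=-1)
    return float(np.mean(preds != label))

def is_epsilon_weakened_robust(model, x, label, epsilon, **kwargs):
    """In this sketch, the region counts as epsilon-weakened robust if the
    estimated adversarial proportion does not exceed the user-specified epsilon."""
    return estimated_adversarial_proportion(model, x, label, **kwargs) <= epsilon
```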

Cited by 1 publication (1 citation statement)
References 28 publications
“…Most of them provide sound DNN robustness estimation in the form of a norm ball, but typically for very small networks or with pessimistic estimation of the norm ball radius. By contrast, statistical methods [5,6,11,28,44,74,75,78] are more efficient and scalable when the structure of DNNs is complex. The primary difference between these methods and DeepPAC is that our method is model-based and thus more accurate.…”
Section: Related Work
confidence: 99%