2019 IEEE/ACM 41st International Conference on Software Engineering: New Ideas and Emerging Results (ICSE-NIER)
DOI: 10.1109/icse-nier.2019.00032
Robustness of Neural Networks: A Probabilistic and Practical Approach

Abstract: Neural networks are becoming increasingly prevalent in software, and it is therefore important to be able to verify their behavior. Because verifying the correctness of neural networks is extremely challenging, it is common to focus on the verification of other properties of these systems. One important property, in particular, is robustness. Most existing definitions of robustness, however, focus on the worst-case scenario where the inputs are adversarial. Such notions of robustness are too strong, and unlike…

Cited by 54 publications (26 citation statements); references 14 publications.
“…Other techniques to generate test data to check the robustness of neural networks include symbolic execution [101], [104], fuzz testing [83], combinatorial testing [141], and abstract interpretation [182]. In Section 5.1, we introduce more content about test generation techniques.…”
Section: Perturbation Targeting Test Data
Confidence: 99%
“…These state-based metrics could be combined with the L_p-norm. Several state-based metrics include adversarial distance (Papernot et al., 2015), average robustness (Moosavi-Dezfooli et al., 2015), adversarial severity (Bastani et al., 2016), adversarial frequency (Bastani et al., 2016), point-wise robustness (Bastani et al., 2016), local/global adversarial robustness (Katz et al., 2017), neuron coverage (Pei et al., 2017), a set of multi-granularity testing criteria (Ma et al., 2018), Lipschitz continuity (Sun et al., 2018b), sign-sign coverage (Sun et al., 2018a), and probabilistic robustness (Mangal et al., 2019). Alcorn et al. (2018) apply natural transformations such as changing viewpoint, lighting, and coloring. Naderi et al. (2021) propose a method based on geometric transformations to generate natural perturbations.…”
Section: Related Work
Confidence: 99%
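The statement above notes that state-based metrics are often combined with the L_p-norm, which measures how far a perturbed input lies from the original. As a minimal sketch (the function name and toy inputs are illustrative, not from any cited paper), the distance underlying metrics such as adversarial distance can be computed like this:

```python
import numpy as np

def lp_distance(x, x_adv, p=2):
    """L_p distance between an input and its perturbed version,
    the building block of L_p-norm-based robustness metrics."""
    diff = (x_adv - x).ravel()
    if np.isinf(p):
        # L_inf norm: largest per-feature change
        return float(np.max(np.abs(diff)))
    return float(np.linalg.norm(diff, ord=p))

x = np.array([0.1, 0.5, 0.9])
x_adv = np.array([0.1, 0.6, 0.85])
print(lp_distance(x, x_adv, p=2))       # Euclidean distance
print(lp_distance(x, x_adv, p=np.inf))  # maximum per-feature change
```

The choice of p changes what "small" means: p = 2 bounds overall energy of the perturbation, while p = ∞ bounds the worst single-feature change, the norm most commonly used in adversarial robustness work.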
“…They follow a statistical way to estimate the label change rate of an input, which motivates us to give a formal definition of the property showing a low label change rate, and to consider the verification problem for such a property. Below we recall the definition of quantitative robustness [27], where we have a parameter 0 < η ≤ 1 representing the confidence of robustness.…”
Section: Quantitative Robustness Verification
Confidence: 99%
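The statistical estimation of a label change rate described above can be sketched with a Monte Carlo loop: sample random perturbations inside a small ball around the input and count how often the prediction flips. This is a minimal illustration only — the toy classifier, function names, and the ε/sample-count parameters are assumptions, not the verification procedure of the cited work; an input would then be considered robust at confidence η if the estimated rate stays below 1 − η.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_classifier(x):
    # Hypothetical stand-in model: label = index of the largest feature.
    return int(np.argmax(x))

def label_change_rate(model, x, eps=0.05, n_samples=1000):
    """Monte Carlo estimate of how often uniform noise within an
    L_inf ball of radius eps flips the model's prediction on x."""
    base = model(x)
    noise = rng.uniform(-eps, eps, size=(n_samples, x.size))
    flips = sum(model(x + d) != base for d in noise)
    return flips / n_samples

x = np.array([0.9, 0.5, 0.1])
rate = label_change_rate(toy_classifier, x)
# Probabilistically robust at confidence eta if rate <= 1 - eta.
print(rate)
```

Unlike worst-case (adversarial) robustness, this estimate only bounds the probability of a label change under random perturbations, which is exactly the weaker, probabilistic notion the paper's abstract argues for.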