Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation 2019
DOI: 10.1145/3314221.3314614
Optimization and abstraction: a synergistic approach for analyzing neural network robustness

Abstract: In recent years, the notion of local robustness (or robustness for short) has emerged as a desirable property of deep neural networks. Intuitively, robustness means that small perturbations to an input do not cause the network to perform misclassifications. In this paper, we present a novel algorithm for verifying robustness properties of neural networks. Our method synergistically combines gradient-based optimization methods for counterexample search with abstraction-based proof search to obtain a sound and (…)
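The counterexample-search half of the approach the abstract describes can be sketched with a standard gradient-based attack. The sketch below is illustrative only, not the paper's algorithm: it uses a plain linear-plus-softmax model and a single FGSM-style step (all names and the model are assumptions) to search an L-infinity ball for a misclassifying perturbation.

```python
# Illustrative sketch of gradient-based counterexample search for local
# robustness (FGSM-style, on a toy linear classifier; not the paper's method).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(W, b, x):
    return softmax(W @ x + b)

def fgsm_counterexample(W, b, x, label, eps):
    """Take one signed-gradient step within the L-inf ball of radius eps
    and report whether the predicted class changed."""
    p = predict(W, b, x)
    # Gradient of cross-entropy loss w.r.t. x for a linear+softmax model.
    grad = W.T @ (p - np.eye(len(p))[label])
    x_adv = np.clip(x + eps * np.sign(grad), x - eps, x + eps)
    return x_adv, np.argmax(predict(W, b, x_adv)) != label
```

If the returned flag is true, the perturbed input is a concrete robustness counterexample; if the search fails, nothing is proved, which is why such optimization must be paired with a sound proof procedure.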



Cited by 90 publications (80 citation statements)
References 42 publications
“…For the "open-loop" verification problem (verification of DNNs), many efficient techniques have been proposed, such as SMT-based methods [22,23,30], mixed-integer linear programming methods [14,24,28], setbased methods [4,17,32,33,48,50,53,57], and optimization methods [51,58]. For the "closed-loop" verification problem (NCCS verification), we note that the Verisig approach [20] is efficient for NNCS with nonlinear plants and with Sigmoid and Tanh activation functions.…”
Section: Related Workmentioning
confidence: 99%
“…For the "open-loop" verification problem (verification of DNNs), many efficient techniques have been proposed, such as SMT-based methods [22,23,30], mixed-integer linear programming methods [14,24,28], setbased methods [4,17,32,33,48,50,53,57], and optimization methods [51,58]. For the "closed-loop" verification problem (NCCS verification), we note that the Verisig approach [20] is efficient for NNCS with nonlinear plants and with Sigmoid and Tanh activation functions.…”
Section: Related Workmentioning
confidence: 99%
“…A classifier is stable for some (typically very small) perturbation of its input samples, representing an adversarial attack, when it assigns the same class to all the samples within that perturbation; imperceptible malicious alterations of input objects should therefore not deceive a stable classifier. Formal verification methods for neural networks may rely on a number of different techniques: linear approximation of functions (Weng et al. 2018; Zhang et al. 2018), semidefinite relaxations (Raghunathan, Steinhardt, and Liang 2018), logical SMT solvers (Huang et al. 2017; Katz et al. 2017), symbolic interval propagation (Wang et al. 2018a), abstract interpretation (Gehr et al. 2018; Singh et al. 2018), or hybrid synergistic approaches (Anderson et al. 2019; Wang et al. 2018b). Abstract interpretation (Cousot and Cousot 1977) is a de facto standard technique that has been used for forty years to design static analysers and verifiers of programming languages.…”
Section: Introduction
confidence: 99%
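The abstraction-based side mentioned above (symbolic interval propagation, abstract interpretation) can be sketched with plain interval bound propagation. This is a minimal illustration of the general technique, not the abstract domain of any cited paper; the toy single-layer classifier and all function names are assumptions.

```python
# Illustrative sketch of interval bound propagation, the simplest
# abstraction-based proof search for local robustness (not any paper's
# exact abstract domain).
import numpy as np

def affine_bounds(W, b, lo, hi):
    """Sound elementwise bounds on W @ x + b when lo <= x <= hi."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def relu_bounds(lo, hi):
    """Propagate an interval through ReLU (for hidden layers of deeper nets)."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def certify_robust(W, b, x, eps, label):
    """Prove the label cannot change anywhere in the L-inf ball of radius eps
    around x, for a single affine layer. Soundly returns False when the
    abstraction is too coarse to prove robustness."""
    lo, hi = affine_bounds(W, b, x - eps, x + eps)
    # Robust if the target logit's lower bound beats every other upper bound.
    return all(lo[label] > hi[j] for j in range(len(lo)) if j != label)
```

A `True` result is a proof of robustness on the ball; a `False` result is inconclusive, which is where the gradient-based counterexample search complements the abstraction.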
“…A wide spectrum of studies has been performed, including the generation of adversarial examples [4,8,16,22,23,26,36] and the crafting of defenses [4,10,11,21,28,34,35,42] against such synthetic attacks. Also, studying DNN models as software and utilizing SE techniques has proven useful for validating and testing DNN models [2,15,17,24,27,39,40].…”
Section: Study On the Robustness Property
confidence: 99%