2020
DOI: 10.1007/978-3-030-53288-8_4

Improved Geometric Path Enumeration for Verifying ReLU Neural Networks

Abstract: Neural networks provide quick approximations to complex functions, and have been increasingly used in perception as well as control tasks. For use in mission-critical and safety-critical applications, however, it is important to be able to analyze what a neural network can and cannot do. For feed-forward neural networks with ReLU activation functions, although exact analysis is NP-complete, recently-proposed verification methods can sometimes succeed. The main practical problem with neural network v…

Cited by 78 publications (62 citation statements)
References 17 publications
“…Finally, we evaluated the performance of PEREGRiNN by using it to verify the adversarial robustness of networks trained on the MNIST [21] dataset. Our experiments show that PEREGRiNN is on average 1.27× faster than Neurify [31], 1.24× faster than Venus [6], 1.15× faster than nnenum [4], and 1.65× faster than Marabou [19]. It also proves 27%, 19%, 10%, and 51% more properties than the other solvers, respectively.…”
Section: Introduction
confidence: 78%
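The excerpt above reports solver runtimes without spelling out what is being verified. For orientation, a standard local adversarial-robustness query of the kind these solvers (PEREGRiNN, Neurify, Venus, nnenum, Marabou) are typically run on for MNIST can be written as follows; the symbols f, x_0, c, and ε are generic notation for this sketch, not taken from the cited paper:

\forall x \in \mathbb{R}^n:\quad \|x - x_0\|_\infty \le \varepsilon \;\Longrightarrow\; f_c(x) > f_j(x) \ \ \text{for all } j \ne c

Here f is the network, x_0 a correctly classified image with label c, and ε the perturbation radius. A sound and complete verifier either proves the implication for the whole ε-ball (the property holds) or returns a concrete perturbed input that changes the predicted class.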
“…Since PEREGRiNN is a sound and complete verification algorithm, we restrict our comparison to other sound and complete algorithms. NN verifiers can be grouped into roughly four categories: (i) SMT-based methods, which encode the problem into a Satisfiability Modulo Theory problem [11,18,19]; (ii) MILP-based methods, which encode the verification problem as a Mixed Integer Linear Program [3,5-8,14,23,29]; (iii) reachability-based methods, which perform layer-by-layer reachability analysis to compute the reachable set [4,13,15,17,30,32,34,35]; and (iv) convex relaxation methods [10,31,33]. In general, (i), (ii) and (iii) suffer from poor scalability.…”
Section: Related Work
confidence: 99%
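To make category (ii) concrete: MILP-based verifiers typically encode each ReLU y = max(0, z) exactly with one binary variable δ and precomputed bounds l ≤ z ≤ u on the pre-activation value. This is the standard big-M construction, sketched here for illustration rather than quoted from any of the cited tools:

y \ge 0,\qquad y \ge z,\qquad y \le u\,\delta,\qquad y \le z - l\,(1-\delta),\qquad \delta \in \{0,1\}

Setting δ = 1 forces y = z (active phase, feasible only when z ≥ 0), while δ = 0 forces y = 0 (inactive phase, feasible only when z ≤ 0). The tightness of the bounds l and u directly affects solver performance, which is one reason bound propagation in the style of the reachability methods (iii) is often combined with MILP encodings.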
“…Experimental Setup. The approach is implemented in the NNV software tool for verification of deep neural networks. We evaluate our approach by verifying the robustness of a set of SSNs trained on the MNIST [21] and M2NIST datasets shown in Table 1, where class "ten" corresponds to the background, and the other classes to the corresponding digits.…”
Section: Discussion
confidence: 99%
“…Finally, verification of DNNs is challenging, and presently the most complex networks remain inaccessible to the majority of methods. However, several recent approaches have focused on improving the efficiency of existing methods via parallelization and other techniques [3,35,40]. As verification work is only meaningful when paired with high-quality specifications, there has been significant work on the importance of semantics when defining system specifications against adversarial attacks [27], and our paper contributes to this direction through our formulation of robustness specifications and metrics for segmentation tasks.…”
Section: Related Work
confidence: 99%