2020
DOI: 10.1609/aaai.v34i04.5944

Fastened CROWN: Tightened Neural Network Robustness Certificates

Abstract: The rapid growth of deep learning applications in real life is accompanied by severe safety concerns. To address these concerns, much research has been devoted to reliably evaluating the fragility of different deep neural networks. Apart from devising adversarial attacks, quantifiers that certify safeguarded regions have also been designed over the past five years. The summarizing work of Salman et al. (2019) unifies a family of existing verifiers under a convex relaxation framework. We dr…

Cited by 38 publications (47 citation statements)
References 8 publications
“…These methods offer exactness guarantees but are based on solving NP-hard optimization problems, which can make them intractable even for small networks. Incomplete methods can be divided into bound propagation approaches [Gowal et al. 2019; Müller et al. 2020; Singh et al. 2018, 2019b] and those that generate polynomially-solvable optimization problems [Bunel et al. 2020a; Dathathri et al. 2020; Lyu et al. 2020; Raghunathan et al. 2018; Singh et al. 2019a; Xiang et al. 2018] such as linear programming (LP) or semidefinite programming (SDP) optimization problems. Compared to deterministic certification methods, randomized smoothing [Cohen et al. 2019; Lecuyer et al. 2018; Salman et al. 2019a] is a defence method providing only probabilistic guarantees and incurring significant runtime costs at inference time, with the generalization to arbitrary safety properties still being an open problem.…”
Section: Effectiveness Of SBLM And PDDM For Convex Hull Computations
confidence: 99%
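The bound-propagation family mentioned in this excerpt can be illustrated with a single interval bound propagation (IBP) step, the coarsest member of the convex-relaxation framework the paper discusses. This is a minimal sketch, not any cited tool's implementation; the weights and input box below are made up for illustration:

```python
import numpy as np

def interval_bound_relu_layer(W, b, l, u):
    """Propagate an input box [l, u] through x -> relu(W x + b).

    Classic IBP: split W into its positive and negative parts so each
    output bound is achieved at a worst-case corner of the input box.
    """
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    lower = W_pos @ l + W_neg @ u + b  # smallest possible pre-activation
    upper = W_pos @ u + W_neg @ l + b  # largest possible pre-activation
    # ReLU is monotone, so the bounds pass through it directly.
    return np.maximum(lower, 0), np.maximum(upper, 0)

# Toy example: one hidden layer, input box [-0.1, 0.1]^2 (made-up numbers).
W = np.array([[1.0, -2.0], [0.5, 0.3]])
b = np.array([0.1, -0.2])
l = np.array([-0.1, -0.1])
u = np.array([0.1, 0.1])
lo, hi = interval_bound_relu_layer(W, b, l, u)
assert np.all(lo <= hi)
```

Tighter verifiers in the taxonomy above (CROWN-style linear relaxations, LP, SDP) trade this layer-by-layer cheapness for bounds that account for correlations between neurons.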
“…While our optimal bounds significantly improve precision compared to intervals, they are not sufficient to certify robustness. Prior work for ReLU networks [5, 12, 23] showed that the greedy approach of always selecting the optimal bounds minimizing the gap can yield less precise results than an adaptive strategy which computes bounds guided by the certification problem. Based on this observation, we introduce a novel approach based on splitting and gradient descent that computes polyhedral abstractions for the non-linearities employed in LSTMs, informed by the certification problem, and proves that min h_2 − h_1 > 0 actually holds.…”
Section: Precise Polyhedral Bounds Via LP
confidence: 99%
“…Our method is also faster, as it performs expensive gradient-based optimization only for the output layer, whereas [21] performs this step twice for each neuron in the LSTM. [5, 12, 23] also suggest a similar idea of bounding ReLU's lower bound using gradient descent, but their approach is limited to unary functions with trivial candidates and is not applicable to our setting, which requires handling complex binary operations with non-trivial initial bounds.…”
Section: Precise Polyhedral Bounds Via LP
confidence: 99%
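The gradient-based bound tuning these excerpts refer to can be sketched as projected gradient ascent over the lower-relaxation slopes of unstable ReLUs. This is a schematic illustration in the spirit of the cited works, not their exact method; the network weights, input box, and variable names below are all made up:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))       # hidden-layer weights (made up)
b = rng.normal(size=4)
v = rng.normal(size=4)            # output weights: f(x) = v . relu(W x + b)
xl, xu = -np.ones(2), np.ones(2)  # input box

# Pre-activation bounds via interval arithmetic.
Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
l = Wp @ xl + Wn @ xu + b
u = Wp @ xu + Wn @ xl + b

def lower_bound(alpha):
    """Sound lower bound on f over the box, parameterized by the
    lower-relaxation slope alpha in [0, 1] of each unstable ReLU."""
    denom = u - l + 1e-12
    # A neuron with v_j >= 0 needs a lower relaxation relu(z) >= alpha*z
    # (valid for any alpha in [0, 1] when l < 0 < u); a neuron with
    # v_j < 0 needs the upper chord relu(z) <= u (z - l) / (u - l).
    s = np.where(v >= 0, alpha, u / denom)
    t = np.where(v >= 0, 0.0, -u * l / denom)
    # Stable neurons are exactly linear: identity or zero.
    s = np.where(l >= 0, 1.0, np.where(u <= 0, 0.0, s))
    t = np.where((l >= 0) | (u <= 0), 0.0, t)
    a = (v * s) @ W                # f(x) >= a . x + c0 on the box
    c0 = v @ (s * b + t)
    # Closed-form minimum of the linear bound over the box.
    return np.maximum(a, 0) @ xl + np.minimum(a, 0) @ xu + c0

# Tune the slopes by projected finite-difference gradient ascent,
# guided by the bound itself rather than fixed greedily per neuron.
alpha, eps, step = np.full(4, 0.5), 1e-4, 0.1
best = lower_bound(alpha)
for _ in range(100):
    g = np.array([(lower_bound(alpha + eps * e) - lower_bound(alpha - eps * e))
                  / (2 * eps) for e in np.eye(4)])
    alpha = np.clip(alpha + step * g, 0.0, 1.0)
    best = max(best, lower_bound(alpha))
```

Soundness does not depend on the optimization: every alpha in [0, 1] yields a valid bound, so the ascent can only tighten, never invalidate, the certificate.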
“…Unfortunately, the DNN formal verification problem is NP-complete even for simple neural networks and specifications [37, 39], and becomes exponentially harder as the network size increases. Still, great efforts are being put into devising verification schemes that can solve average instances of the problem quickly, and which support the verification of additional kinds of DNNs and properties [3-5, 8, 14, 17, 21-23, 25, 34-40, 42, 44, 45, 48, 50, 51, 56, 58, 61, 62, 65, 66, 69-73].…”
Section: Introduction
confidence: 99%