2019
DOI: 10.1609/aaai.v33i01.33013240

CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks

Abstract: Verifying robustness of neural network classifiers has attracted great interest and attention due to the success of deep neural networks and their unexpected vulnerability to adversarial perturbations. Although finding the minimum adversarial distortion of neural networks (with ReLU activations) has been shown to be an NP-complete problem, obtaining a non-trivial lower bound on the minimum distortion as a provable robustness guarantee is possible. However, most previous works only focused on simple fully-connected la…


Cited by 118 publications (142 citation statements) · References 3 publications
“…Robustness of models with respect to adversarial examples is an active field of research (Boopathy et al., 2019; Cisse et al., 2017; Gu and Rigazio, 2014; Carlini and Wagner, 2017b; Metzen et al., 2017; Carlini and Wagner, 2017a). Arnab et al. (2018) evaluate the robustness of semantic segmentation models to adversarial attacks across a wide variety of network architectures (e.g.…”
Section: Related Workmentioning
confidence: 99%
“…in the same class by N. Note that, as MSR(N, x) is defined in the embedding space, which is continuous, the perturbation space, Ball(x, ), contains meaningful texts as well as texts that are not syntactically or semantically meaningful. In order to compute l we leverage constraint relaxation techniques developed for CNNs (Boopathy et al, 2019) and LSTMs (Ko et al, 2019), namely CNN-Cert and POPQORN. For an input text x and a hyperbox around Ball(x, ), these techniques find linear lower and upper bounds for the activation functions of each layer of the neural network and use these to propagate an over-approximation of the hyperbox through the network.…”
Section: Lower Bound: Constraint Relaxationmentioning
confidence: 99%
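The propagation scheme described in the excerpt above can be illustrated with a minimal sketch. The snippet below uses plain interval bound propagation, a coarser cousin of the layer-wise linear relaxations used by CNN-Cert and POPQORN: a hyperbox around an input is pushed through an affine layer and a ReLU, yielding an over-approximation of the reachable outputs. The network weights, the input point, and the radius `eps` are illustrative assumptions, not values from any of the cited papers.

```python
# Minimal interval bound propagation sketch (not CNN-Cert itself, which
# uses tighter linear lower/upper bounds per activation). All weights
# and inputs below are made-up illustrative values.

def affine_bounds(lower, upper, W, b):
    """Propagate elementwise interval bounds through y = W x + b.

    For each output, a positive weight attains its minimum at the input's
    lower bound and its maximum at the upper bound; a negative weight
    swaps the two.
    """
    lo, hi = [], []
    for row, bias in zip(W, b):
        lo.append(bias + sum(w * (l if w >= 0 else u)
                             for w, l, u in zip(row, lower, upper)))
        hi.append(bias + sum(w * (u if w >= 0 else l)
                             for w, l, u in zip(row, lower, upper)))
    return lo, hi

def relu_bounds(lower, upper):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return [max(0.0, l) for l in lower], [max(0.0, u) for u in upper]

# Hyperbox Ball(x, eps) around a 2-d input x (illustrative values):
x, eps = [1.0, -0.5], 0.1
lower = [xi - eps for xi in x]
upper = [xi + eps for xi in x]

# One affine layer followed by ReLU (illustrative weights):
W1, b1 = [[1.0, -1.0], [0.5, 2.0]], [0.0, 0.1]
lower, upper = affine_bounds(lower, upper, W1, b1)
lower, upper = relu_bounds(lower, upper)
```

If the resulting output intervals keep the predicted class's score above all others, the radius `eps` is certified; CNN-Cert improves on this by replacing the interval over-approximation with linear bounds on each activation, which stay much tighter as depth grows.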
“…In this work, we exploit approximate, scalable, linear constraint relaxation methods (Weng et al., 2018a; Wong and Kolter, 2018), which do not assume Lipschitz continuity. In particular, we adapt the CNN-Cert tool (Boopathy et al., 2019) and its recurrent extension POPQORN (Ko et al., 2019) to compute robustness guarantees for text classification in the NLP domain. We note that NLP robustness has also been addressed using interval bound propagation (Jia et al., 2019).…”
Section: Introductionmentioning
confidence: 99%
“…The authors extended their work to overcome the limitation of simple fully-connected layers and ReLU activations by proposing CNN-Cert. [Figure panels: (a)–(b) (δ, ε)-parameter robustness for δ = 0.005 and δ = 0.01; (c)–(d) δσ-parameter robustness for δ = 0.005 and δ = 0.01.] The new framework can handle various architectures including convolutional layers, max-pooling layers, batch normalization layers, and residual blocks, as well as general activation functions, and is capable of certifying robustness on general convolutional neural networks [3]. Data-centric approaches entail identifying and rejecting perturbed samples, or augmenting the training data to handle perturbations appropriately.…”
Section: Related Workmentioning
confidence: 99%