2022
DOI: 10.48550/arxiv.2206.03482
Preprint
Parametric Chordal Sparsity for SDP-based Neural Network Verification

Abstract: Many future technologies rely on neural networks, but verifying the correctness of their behavior remains a major challenge. It is known that neural networks can be fragile in the presence of even small input perturbations, yielding unpredictable outputs. The verification of neural networks is therefore vital to their adoption, and a number of approaches have been proposed in recent years. In this paper we focus on semidefinite programming (SDP) based techniques for neural network verification, which are parti…

Cited by 1 publication (1 citation statement)
References 41 publications
"…Most work falls into the open-loop verification category, which seeks to analyze the ML component on its own. For analyzing deep neural networks, most works focus on checking some form of input-output reachability [10, 26-30]. In addition, work has gone into estimating the Lipschitz constants of deep neural networks [31].…"
Section: Literature Review
Confidence: 99%
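The citing passage mentions estimating Lipschitz constants of deep neural networks. As a point of reference for what SDP-based estimators (such as those cited) improve upon, here is a minimal sketch of the classical naive upper bound: for a feed-forward network with 1-Lipschitz activations (e.g. ReLU), the product of the spectral norms of the weight matrices bounds the network's Lipschitz constant. The weight matrices below are illustrative assumptions, not taken from any network in the paper.

```python
import numpy as np

def naive_lipschitz_bound(weights):
    """Upper-bound the Lipschitz constant of a feed-forward network with
    1-Lipschitz activations by the product of the spectral norms (largest
    singular values) of its weight matrices."""
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))

# Hypothetical two-layer weights: spectral norms are 2 and 3 respectively.
W1 = np.array([[1.0, 0.0], [0.0, 2.0]])
W2 = np.array([[3.0, 0.0], [0.0, 1.0]])

print(naive_lipschitz_bound([W1, W2]))  # 6.0
```

This bound is loose because it ignores how activation patterns couple the layers; SDP-based methods tighten it by optimizing over quadratic constraints on the activations, which is the kind of structure chordal sparsity exploits.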
“…Most work falls into the open-loop verification category, which seeks to analyze the ML component on its own. For analyzing deep neural networks, most works focus on checking some form of input-output reachability [10,[26][27][28][29][30]. In addition, work has gone into estimating the Lipschitz constants of deep neural networks [31].…”
Section: Literature Reviewmentioning
confidence: 99%