2021
DOI: 10.1007/978-3-030-88885-5_23
pyNeVer: A Framework for Learning and Verification of Neural Networks

Abstract: Automated verification of neural networks (NNs) was first proposed in [1] and is now an established research topic with several contributions to date; see, e.g., [2]. The taxonomy proposed in [2] suggests a division between verification tools providing deterministic guarantees, e.g., Marabou [3], and those providing sound approximations, e.g., ERAN [4] and NNV [5]. pyNeVer borrows basic techniques from [5] and casts them into an abstraction approach inspired by [4]; like ERAN and NNV, it features complete verifi…

Cited by 20 publications
(18 citation statements)
References 7 publications
“…However, the complete setting in algorithm 1 potentially causes an exponential blow-up in the number of stars generated, and thus the computation might simply not be feasible. The mixed setting strikes a trade-off between the complete and over-approximated settings: using a heuristic detailed in [GPT21], NEVER2 tries to concretize the least number of stars that enables proving the property without blowing up the computation time. In our experiments, we consider two different sub-settings for mixed, called mixed and mixed2, which differ in the number of neurons to refine, either 1 or 2, respectively.…”
Section: Methods
confidence: 99%
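The mixed setting described in the excerpt above can be sketched as a neuron-selection step: pick a small number k of neurons per layer to refine exactly, and over-approximate the rest. The sketch below is illustrative only; the `Neuron` class, the function names, and the interval-width ranking are assumptions standing in for the actual heuristic detailed in [GPT21], not pyNeVer's API.

```python
# Hypothetical sketch of the "mixed" abstraction setting.
# The ranking heuristic (widest pre-activation interval first) is an
# illustrative stand-in for the one detailed in [GPT21].
from dataclasses import dataclass
from typing import List


@dataclass
class Neuron:
    index: int
    lb: float  # lower bound of the pre-activation interval
    ub: float  # upper bound of the pre-activation interval


def select_neurons_to_refine(neurons: List[Neuron], k: int) -> List[int]:
    """Pick the k 'unstable' neurons (lb < 0 < ub) with the widest
    input intervals: refining these exactly splits the star set, while
    the remaining neurons are over-approximated, keeping the number of
    stars (and hence the runtime) under control."""
    unstable = [n for n in neurons if n.lb < 0 < n.ub]
    unstable.sort(key=lambda n: n.ub - n.lb, reverse=True)
    return [n.index for n in unstable[:k]]


# mixed refines 1 neuron per layer, mixed2 refines 2:
neurons = [Neuron(0, -1.0, 3.0), Neuron(1, 0.5, 2.0), Neuron(2, -2.0, 2.5)]
print(select_neurons_to_refine(neurons, 1))  # [2]  (widest unstable interval)
print(select_neurons_to_refine(neurons, 2))  # [2, 0]
```

Note that neuron 1 is never selected: its pre-activation interval is entirely non-negative, so its ReLU is exactly the identity and needs no refinement.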
“…The formal verification of neural networks [19][20][21][22][23][24] is another valuable research direction for enhancing neural network robustness. Pulina and Tacchella [19] introduced a pivotal study on formal verification for neural networks.…”
Section: Related Work
confidence: 99%
“…• properties about the network can be attached to it (again, visually, through the addition of special blocks in the graphical interface) and the network can be verified using our backend pyNeVer (Guidotti et al, 2021) in a “push-button” fashion; also in this case, the user configures pyNeVer through dialog boxes which are meant to simplify the interaction as much as possible.…”
Section: Introduction
confidence: 99%
“…(Guidotti et al, 2021) defines the abstract mapping of a functional layer with n ReLU activation functions and adapts the methodology proposed in Tran et al (2019). The function compute_layer takes as input an indexed list of N stars Θ1, …”
confidence: 99%
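The case analysis behind the abstract ReLU mapping in the excerpt above can be illustrated with a simplified model. For brevity, each "star" below is represented by per-neuron interval bounds; real star sets carry a predicate (center, basis matrix, linear constraints), so this is a sketch of the split-versus-over-approximate logic, not pyNeVer's compute_layer. All names here are illustrative assumptions.

```python
# Illustrative sketch: propagating a set through one ReLU layer,
# with boxes (per-neuron interval bounds) standing in for star sets.
from typing import List, Tuple

Box = List[Tuple[float, float]]  # one (lb, ub) pair per neuron


def relu_layer_complete(box: Box) -> List[Box]:
    """Exact mapping: each unstable neuron (lb < 0 < ub) splits the
    input set into an active branch (x >= 0, identity) and an inactive
    branch (x <= 0, output 0). The output is therefore a *list* of
    boxes, which is the exponential blow-up mentioned in the quoted
    citation statements."""
    results = [list(box)]
    for i, (lb, ub) in enumerate(box):
        new_results = []
        for b in results:
            l, u = b[i]
            if u <= 0:            # always inactive: output is 0
                nb = list(b); nb[i] = (0.0, 0.0); new_results.append(nb)
            elif l >= 0:          # always active: identity
                new_results.append(b)
            else:                 # unstable: split into two branches
                active = list(b); active[i] = (0.0, u)
                inactive = list(b); inactive[i] = (0.0, 0.0)
                new_results += [active, inactive]
        results = new_results
    return results


def relu_layer_overapprox(box: Box) -> Box:
    """Sound over-approximation: a single output box, clamping each
    neuron's bounds at 0. Cheap, but may lose precision."""
    return [(max(lb, 0.0), max(ub, 0.0)) for lb, ub in box]


box = [(-1.0, 2.0), (0.5, 1.0)]       # neuron 0 unstable, neuron 1 active
print(len(relu_layer_complete(box)))  # 2 (one split from the unstable neuron)
print(relu_layer_overapprox(box))     # [(0.0, 2.0), (0.5, 1.0)]
```

The mixed setting interpolates between these two functions: the selected neurons go through the exact split, and the rest are clamped as in the over-approximation.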