2019
DOI: 10.48550/arxiv.1911.09032
Preprint

Outside the Box: Abstraction-Based Monitoring of Neural Networks

Thomas A. Henzinger, Anna Lukina, Christian Schilling

Abstract: Neural networks have demonstrated unmatched performance in a range of classification tasks. Despite numerous efforts of the research community, novelty detection remains one of the significant limitations of neural networks. The ability to identify previously unseen inputs as novel is crucial for our understanding of the decisions made by neural networks. At runtime, inputs not falling into any of the categories learned during training cannot be classified correctly by the neural network. Existing approaches t…


Cited by 5 publications (20 citation statements)
References 17 publications
“…We contribute to the monitoring approaches based on geometrical shape abstraction. In particular, we extend the work in [19] to address some of its limitations. The approach in [19] only leverages the good reference behavior of the network.…”
Section: Approach and Contributions (mentioning; confidence: 99%)
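
For context, the box abstraction referenced in this excerpt can be pictured as follows. This is a minimal illustrative sketch (not the authors' implementation): it shows what leveraging only the good reference behavior means in practice, namely that per-class interval bounds over a hidden layer are collected solely from training samples the network classifies correctly. The helper names hidden and predict, and the NumPy-based setup, are assumptions for illustration.

import numpy as np

def build_boxes(xs, ys, hidden, predict):
    """Collect per-class min/max bounds ("boxes") over hidden-layer values.
    Only samples the network classifies correctly contribute, i.e., the
    abstraction captures the good reference behavior exclusively."""
    boxes = {}  # class label -> (lower bound vector, upper bound vector)
    for x, y in zip(xs, ys):
        if predict(x) != y:
            continue  # skip misclassified samples: not reference behavior
        v = np.asarray(hidden(x))
        if y not in boxes:
            boxes[y] = (v.copy(), v.copy())
        else:
            lo, hi = boxes[y]
            boxes[y] = (np.minimum(lo, v), np.maximum(hi, v))
    return boxes

A monitor built this way has no notion of known-bad behavior (e.g., activations of misclassified inputs), which is the kind of limitation the citing work sets out to address.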
“…Safety-critical systems involving learning-enabled components (LECs), such as self-driving cars, extensively use data-based techniques for which we so far have no theory allowing behavioural predictability. To favor scalability, some research efforts in the last few years have focused on dynamic verification techniques such as testing [36,43,46,48,50] and runtime verification [6,19,32,5].…”
Section: Introduction (mentioning; confidence: 99%)
“…Since our initial workshop paper (Kang et al., 2018), several works have extended model assertions (Arechiga et al., 2019; Henzinger et al., 2019).…”
Section: Related Work (mentioning; confidence: 99%)
“…Lu et al. (2017) distinguish adversarial examples from clean data by thresholding their values at each ReLU layer. Henzinger et al. (2019) propose to detect novel inputs by observing the hidden layers, i.e., checking whether their values lie outside the ranges observed during training. Given that these works are not open source and the results in their papers are often reported as graphs (such as ROC curves), it is hard to make a fair comparison with their results.…”
Section: Related Work (mentioning; confidence: 99%)
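
To make the range-based detection criterion described in this excerpt concrete, here is a hedged sketch of the runtime check, assuming boxes of the shape produced in the earlier sketch; the interface and the eps slack parameter are illustrative assumptions, not the cited papers' APIs.

import numpy as np

def is_novel(x, boxes, hidden, predict, eps=0.0):
    """Flag x as novel if its hidden-layer values leave the value ranges
    recorded during training for the class the network predicts."""
    label = predict(x)
    if label not in boxes:
        return True  # no reference behavior recorded for this class
    lo, hi = boxes[label]
    v = np.asarray(hidden(x))
    # eps enlarges the box slightly, trading false alarms for missed novelties
    return bool(np.any(v < lo - eps) or np.any(v > hi + eps))

At runtime, the classifier's output would be accepted only when is_novel returns False; otherwise the input is reported as falling outside the monitored ranges.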