2021
DOI: 10.1109/access.2021.3108451

Out-of-Distribution Detection for Deep Neural Networks With Isolation Forest and Local Outlier Factor

Abstract: Deep Neural Networks (DNNs) are extensively deployed in today's safety-critical autonomous systems thanks to their high performance. However, they are known to make mistakes unpredictably, e.g., a DNN may misclassify an object if it is used for perception, or issue unsafe control commands if it is used for planning and control. One common cause for such unpredictable mistakes is Out-of-Distribution (OOD) inputs, i.e., test inputs that fall outside of the distribution of the training dataset. In this paper, we …

Cited by 21 publications (14 citation statements); references 21 publications.
“…To discuss state-of-the-art monitoring methods in deep learning, we define a DNN as a high-dimensional function f_θ, which maps the input data x to output values in the form of, e.g., probability scores for different classes. This mapping depends on the DNN's parameters θ learned from the training data distribution (i.e., in-distribution data) [23]. However, the DNN probability scores are often overconfident and do not guarantee error prediction [2], [19].…”
Section: Runtime Monitoring Methods for Deep Neural Networks
Mentioning confidence: 99%
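As a concrete illustration of the f_θ view and the overconfidence issue noted above, the sketch below shows a max-softmax-probability style monitor that thresholds the classifier's probability scores. This is a minimal sketch; the logits, class count, and threshold are illustrative assumptions, not details taken from the cited works.

import numpy as np

def softmax(logits):
    """Convert raw DNN outputs (logits) into probability scores."""
    z = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(z)
    return exp / exp.sum(axis=-1, keepdims=True)

def msp_monitor(logits, threshold=0.9):
    """Flag inputs whose maximum softmax probability falls below a threshold.

    Note: as the quoted passage points out, these probability scores are often
    overconfident, so a high confidence does not guarantee a correct prediction.
    """
    probs = softmax(logits)
    confidence = probs.max(axis=-1)
    return confidence < threshold  # True -> suspected error / OOD input

# Illustrative usage with made-up logits for a 3-class classifier.
logits = np.array([[4.0, 0.5, 0.1],    # confident prediction
                   [1.1, 1.0, 0.9]])   # uncertain prediction
print(msp_monitor(logits))  # [False  True]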
“…For example, out-of-distribution (OOD) detection methods focus on detecting input data outside the training data distribution [19]. They address a binary classification problem, deciding whether an input is in-distribution (ID) or OOD, in order to prevent a DNN error on input data that has never been seen during training [23]. Adversarial detection methods address intentionally modified input data designed to fool the DNN (i.e., adversarial attacks [24]), which are often very close to the training data distribution with minimal targeted modifications [22].…”
Section: Runtime Monitoring Methods for Deep Neural Networks
Mentioning confidence: 99%
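The binary ID/OOD decision described above is usually obtained by thresholding a detector score. The sketch below shows one common way to calibrate such a threshold on held-out in-distribution data; the 95% TPR operating point, the score convention (higher = more ID-like), and the synthetic scores are illustrative assumptions, not details from the cited papers.

import numpy as np

def calibrate_threshold(id_scores, target_tpr=0.95):
    """Pick a score threshold so that `target_tpr` of in-distribution (ID)
    validation samples are accepted; inputs scoring below it are labelled OOD.
    (The 95% TPR operating point is assumed here for illustration.)"""
    return np.quantile(id_scores, 1.0 - target_tpr)

def ood_label(scores, threshold):
    """Binary ID/OOD decision: True means the detector flags the input as OOD."""
    return scores < threshold

# Illustrative usage with synthetic detector scores.
rng = np.random.default_rng(0)
id_val_scores = rng.normal(loc=1.0, scale=0.2, size=1000)  # held-out ID data
test_scores = np.array([1.05, 0.95, 0.40])                 # new inputs
thr = calibrate_threshold(id_val_scores)
print(ood_label(test_scores, thr))  # e.g. [False False  True]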
“…An OOD detector [24] is a binary classifier that assigns an InD or OOD label to an input sample. OOD detection is an active research area with a wide range of algorithms and techniques [9], including our previous work [35] on using Isolation Forest (IF) or Local Outlier Factor (LOF) for outlier detection in one or more hidden layers of a CNN.…”
Section: Out-of-Distribution (OOD) Detection
Mentioning confidence: 99%
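A minimal sketch of the general idea referenced above: fit scikit-learn's IsolationForest and LocalOutlierFactor on hidden-layer activations of a network and use their inlier/outlier predictions as an OOD signal. The feature shapes, hyperparameters, and synthetic data are illustrative assumptions and do not reproduce the exact configuration used in [35].

import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

# Hypothetical hidden-layer activations: rows are samples, columns are the
# flattened features of one hidden layer of a CNN (shapes are illustrative).
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(2000, 128))                       # in-distribution features
test_feats = np.vstack([rng.normal(size=(10, 128)),              # ID-like samples
                        rng.normal(loc=4.0, size=(10, 128))])    # OOD-like samples

# Isolation Forest fitted on the training-set activations.
iforest = IsolationForest(n_estimators=100, random_state=0).fit(train_feats)
if_pred = iforest.predict(test_feats)   # +1 = inlier (ID), -1 = outlier (OOD)

# Local Outlier Factor in novelty mode, so it can score unseen samples.
lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(train_feats)
lof_pred = lof.predict(test_feats)      # +1 = inlier (ID), -1 = outlier (OOD)

print("IF flags OOD:", np.sum(if_pred == -1))
print("LOF flags OOD:", np.sum(lof_pred == -1))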
“…Each normal CAN image contains 48 normal CAN messages, and each KA or UA CAN image contains 48 CAN messages with at least one attack message of the specific attack type (this corresponds to an attack threshold of 1 in GIDS [42]). Inspired by the common method of emulating OOD samples in research on OOD detection [9,35], we emulate UA samples with samples of one KA that are excluded from the training dataset. Suppose we have 3 KAs (Fuzzing, RPM, GEAR) and 1 UA (DoS).…”
Section: Dataset Construction
Mentioning confidence: 99%
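A minimal sketch of the leave-one-attack-out protocol described in the quoted passage: one known attack (KA) class is withheld from training and its samples play the role of the emulated unknown attack (UA). The class names follow the example in the text; the helper function and placeholder sample lists are hypothetical.

# Known attack (KA) classes from the quoted example; "DoS" is held out as the UA.
known_attacks = ["Fuzzing", "RPM", "GEAR", "DoS"]

def split_known_unknown(samples_by_attack, held_out="DoS"):
    """samples_by_attack: dict mapping attack name -> list of CAN-image samples.

    Returns (training attacks, emulated-UA samples): the held-out attack's
    samples never appear in training, so at test time they act as an unseen
    attack type.
    """
    train = {a: s for a, s in samples_by_attack.items() if a != held_out}
    unknown = samples_by_attack[held_out]
    return train, unknown

# Illustrative usage with placeholder sample identifiers.
dataset = {a: [f"{a}_img_{i}" for i in range(3)] for a in known_attacks}
train_set, emulated_ua = split_known_unknown(dataset, held_out="DoS")
print(sorted(train_set))  # ['Fuzzing', 'GEAR', 'RPM']
print(emulated_ua)        # ['DoS_img_0', 'DoS_img_1', 'DoS_img_2']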