2022
DOI: 10.48550/arxiv.2201.05077
Preprint

Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering

Mohammed Oualid Attaoui,
Hazem Fahmy,
Fabrizio Pastore
et al.

Abstract: Deep neural networks (DNNs) have demonstrated superior performance over classical machine learning to support many features in safety-critical systems. Although DNNs are now widely used in such systems (e.g., self-driving cars), there is limited progress regarding automated support for functional safety analysis in DNN-based systems. For example, the identification of root causes of errors, to enable both risk analysis and DNN retraining, remains an open problem. In this paper, we propose SAFE, a black-box app…
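The title points to a two-step idea: extract features from the inputs that cause DNN errors, then cluster those features so that images sharing a likely root cause end up in the same group. The sketch below only illustrates that general idea; the pretrained VGG16 backbone, the DBSCAN algorithm, and all parameter values are assumptions chosen for illustration, not details taken from the (truncated) abstract.

# Illustrative sketch (not the paper's exact pipeline): cluster error-inducing
# images by visual similarity so that mispredictions sharing a likely root
# cause land in the same group. Backbone (VGG16) and clustering algorithm
# (DBSCAN) are assumptions.
import numpy as np
import torch
from PIL import Image
from sklearn.cluster import DBSCAN
from torchvision import models, transforms

# Pretrained backbone used as a black-box feature extractor: no access to the
# internals of the DNN under test is needed, only to its mispredicted inputs.
backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
backbone.classifier = backbone.classifier[:-1]  # drop the final logits layer
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(image_paths):
    """Return one feature vector per error-inducing image."""
    feats = []
    for path in image_paths:
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        feats.append(backbone(img).squeeze(0).numpy())
    return np.stack(feats)

def cluster_root_causes(image_paths, eps=12.0, min_samples=3):
    """Group mispredicted inputs; each cluster is a candidate root cause."""
    features = extract_features(image_paths)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
    return labels  # -1 marks unclustered (noise) images; eps needs tuning

Each resulting cluster can then be inspected (or used to guide retraining) as one candidate root cause rather than as a pile of individual failures.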

Cited by 2 publications (3 citation statements)
References 43 publications (65 reference statements)
“…According to Chan et al. [44], for traditional software testing, failure-causing inputs tend to be dense and close together. The same insight applies to DNN model testing, since similar mispredicted inputs tend to be due to the same fault [16,43]. Our goal is to devise a test selection method that is not only capable of detecting more mispredicted inputs but also diverse ones in terms of root causes.…”
Section: Estimating Faults in DNNs
confidence: 96%
“…Hence, many papers rely on mispredictions [6,9,20] for test selection evaluation. However, similar to failures in traditional software systems, many mispredicted inputs can be due to the same faults in the DNN model and are therefore redundant [27,43]. When selecting inputs on a limited budget, we should therefore avoid similar or redundant mispredictions as they do not help reveal additional root causes or faults in DNN models.…”
Section: Research Questions
confidence: 99%
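Read together with the abstract above, this suggests one simple way to use root-cause clusters during test selection: spread a limited labeling budget across clusters instead of spending it on many near-duplicate failures. The heuristic below is a hypothetical illustration of that idea, not the method of the cited papers; the function name and the round-robin policy are assumptions.

# Hypothetical budget-constrained selection heuristic (not the cited papers'
# method): pick inputs round-robin across clusters so the selected set covers
# as many distinct candidate root causes as possible.
from collections import defaultdict

def select_diverse(inputs, cluster_labels, budget):
    """Select up to `budget` inputs, spreading picks across clusters."""
    by_cluster = defaultdict(list)
    for item, label in zip(inputs, cluster_labels):
        by_cluster[label].append(item)

    selected = []
    queues = list(by_cluster.values())
    # Take one input from each non-empty cluster per pass until the budget
    # is spent or every cluster is exhausted.
    while len(selected) < budget and any(queues):
        for queue in queues:
            if queue and len(selected) < budget:
                selected.append(queue.pop(0))
    return selected

Under this policy, two near-identical mispredictions that fall into the same cluster consume only one slot of the budget, leaving room for failures with other likely root causes.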