2021
DOI: 10.1145/3470006

Automatic Fault Detection for Deep Learning Programs Using Graph Transformations

Abstract: Nowadays, we are witnessing an increasing demand in both industry and academia for exploiting Deep Learning (DL) to solve complex real-world problems. A DL program encodes the network structure of a desirable DL model and the process by which the model learns from the training dataset. Like any software, a DL program can be faulty, which implies substantial challenges of software quality assurance, especially in safety-critical domains. It is therefore crucial…
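The kind of fault such detectors target can be illustrated with a short, hypothetical example (not taken from the paper). Assuming a Keras-style program, the sketch below pairs an output layer that emits raw logits with a loss configured to expect probabilities, a structural mismatch that trains without crashing but optimizes the wrong objective:

```python
# Hypothetical illustration of a structural fault in a DL program.
# The output layer produces raw logits, while the loss expects probabilities.
import tensorflow as tf
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(784,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10),  # fault: no softmax activation on the output layer
])

model.compile(
    optimizer="adam",
    loss=keras.losses.CategoricalCrossentropy(from_logits=False),  # expects probabilities
    metrics=["accuracy"],
)

# Fix: either add activation="softmax" to the last layer,
# or set from_logits=True in the loss.
```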

Cited by 17 publications (18 citation statements)
References 34 publications
“…Then, it suggests the best practices to improve the quality of ML components. Nikanjam et al. [52] provided an automatic fault detection tool for DL programs, named NeuraLint, that validates DL programs by detecting faults and design inefficiencies in the implemented models. They identified 23 rules, implemented using graph transformations, to detect various types of bugs in DL programs.…”
Section: SRE in ML-Based Systems (mentioning)
confidence: 99%
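As a rough illustration of what rule checking over a layer graph might look like (a hypothetical sketch, not NeuraLint's actual implementation; the LayerNode structure and the rule below are assumptions made only for this example), one such rule could flag a Conv2D or Dense node that does not feed into any non-linearity:

```python
# Hypothetical sketch of a graph-based rule check over a DL program's layer graph.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LayerNode:
    name: str
    kind: str                                   # e.g. "Conv2D", "Activation", "Dense"
    successors: List["LayerNode"] = field(default_factory=list)

def check_missing_activation(graph: List[LayerNode]) -> List[str]:
    """Rule: a Conv2D/Dense node should be followed by some non-linearity."""
    warnings = []
    for node in graph:
        if node.kind in ("Conv2D", "Dense"):
            if not any(s.kind == "Activation" for s in node.successors):
                warnings.append(f"{node.name}: no activation after {node.kind}")
    return warnings

# Usage: build the graph from the parsed program, then run each rule on it.
conv = LayerNode("conv1", "Conv2D")
dense = LayerNode("fc1", "Dense", successors=[LayerNode("relu1", "Activation")])
conv.successors = [dense]
print(check_missing_activation([conv, dense]))  # flags conv1
```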
“…Our filters eliminated all of their mentioned bugs, so we could not obtain any bugs from their dataset. Nikanjam et al. [52] delivered a public dataset with their automatic bug detection tool, including 34 real bugs in DL programs, 26 from SO and 8 from GitHub. After checking their provided bugs against the mentioned exclusion criteria, we added 8 of the GitHub bugs and 10 of the SO ones to the benchmark.…”
Section: Benchmark (mentioning)
confidence: 99%