2022
DOI: 10.48550/arxiv.2203.04026
Preprint
Toward Understanding Deep Learning Framework Bugs

Abstract: DL frameworks are the basis for constructing all DL programs and models, and thus their bugs can lead to unexpected behaviors in any DL program or model relying on them. Such a wide impact demonstrates the necessity and importance of guaranteeing DL frameworks' quality. Understanding the characteristics of DL framework bugs is a fundamental step for this quality-assurance task, facilitating the design of effective bug detection and debugging approaches. Hence, in this work we conduct the largest-scale study …

Cited by 3 publications (5 citation statements)
References 35 publications
“…(2) In PyTorch, every indicator of MPL bug fixes is significantly greater than that of SPL bug fixes. (3) In TensorFlow, the LOCM, NOFM, and Entropy of MPL bug fixes are significantly larger than those of SPL bug fixes, respectively, while there are no significant differences between MPL bug fixes and SPL bug fixes on OT, NODP, and NOC.…”
Section: Impact of the Use of Multiple PLs on Bug Fixing (RQ4); citation type: mentioning
confidence: 89%
“…We also use Keras [11] to generate DL models for a fair comparison with LEMON [54] and Muffin [18]. Following existing works [18,19], we take the Keras documentation 6 as the reference to define the mutation space. In particular, we collect the information of all 59 Keras layer APIs (i.e., N_layer = 59), including their possible datatypes, required input dimensions, and possible parameter values.…”
Section: Experiment Setup; citation type: mentioning
confidence: 99%
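The citation statement above describes a mutation space built from per-layer metadata (datatypes, required input dimensions, and candidate parameter values) for each Keras layer API. As a minimal sketch of what such a space could look like, the snippet below hard-codes illustrative entries for two of the 59 layer APIs and samples a concrete layer configuration from them; the entry contents and the `sample_layer_config` helper are assumptions for illustration, not the actual data or code of the cited works.

```python
import random

# Illustrative mutation-space entries for 2 of the 59 Keras layer APIs
# (N_layer = 59 in the cited statement). Values are assumed examples.
MUTATION_SPACE = {
    "Dense": {
        "dtypes": ["float32", "float64"],      # possible datatypes
        "input_ndims": [2],                    # required input dimensions
        "params": {"units": [1, 8, 64], "activation": ["relu", None]},
    },
    "Conv2D": {
        "dtypes": ["float32"],
        "input_ndims": [4],
        "params": {"filters": [4, 16], "kernel_size": [(1, 1), (3, 3)]},
    },
}

def sample_layer_config(layer_name, rng=random):
    """Draw one concrete layer configuration from the mutation space."""
    entry = MUTATION_SPACE[layer_name]
    return {
        "layer": layer_name,
        "dtype": rng.choice(entry["dtypes"]),
        "ndim": rng.choice(entry["input_ndims"]),
        "params": {k: rng.choice(v) for k, v in entry["params"].items()},
    }

cfg = sample_layer_config("Dense")
print(cfg["layer"], cfg["ndim"])
```

A mutation-based model generator would repeatedly sample such configurations to construct and mutate DL models within the documented API constraints.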
“…Table 6 presents the coverage results achieved by the baselines. Since collecting DL library coverage is costly and testing practices and bugs inside different DL libraries share a significant commonality [6,22,34,58], we used the branch and line coverage on TensorFlow's model construction and model execution modules as the representative for evaluating each technique's test coverage. According to Table 6, COMET outperforms the baselines on all five coverage criteria.…”
Section: RQ1: Comparison with Baselines; citation type: mentioning
confidence: 99%