Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2019)
DOI: 10.1145/3338906.3341462
A taxonomy of metrics for software fault prediction

Cited by 3 publications (4 citation statements)
References 13 publications
“…This is particularly true for software metrics, which often cannot be compared directly: due to their different scales and distributions, there is no mathematically sound way to do so (Ulan et al., 2018). While some have attempted to associate software metrics with quality (e.g., Basili et al., 1996), applications most often have to resort to using software metrics as, e.g., fault indicators (Aziz et al., 2019; Caulo, 2019), or as indicators of reliability and complexity (Chidamber & Kemerer, 1994). Furthermore, none of the existing approaches that attempted to associate software metrics with quality paid great attention to the fact that software metrics have different distributions and, therefore, different statistical properties across application domains.…”
Section: Statement of Need
confidence: 99%
“…Over the decades of static code analysis, many metrics have been engineered to measure structural properties such as code complexity [24] or maintainability [9]. The most exhaustive set of metrics was presented in the work of Caulo et al. [7]: the authors collected over 300 different metrics for code evaluation. While some of these metrics are not applicable to Python, several works have focused specifically on evaluating Python code.…”
Section: Structural Studies on Code
confidence: 99%
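The code-complexity metrics mentioned above can be illustrated with a minimal sketch (not taken from any of the cited works): McCabe's cyclomatic complexity, approximated for Python source by counting decision points in the abstract syntax tree. The function name and the set of branch node types are illustrative choices, not an established API.

```python
# Hedged sketch: approximate McCabe's cyclomatic complexity for
# Python code by counting branch points in its AST. The node set
# below is a simplification chosen for illustration.
import ast

# AST node types that introduce an additional execution path.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                ast.ExceptHandler, ast.And, ast.Or)

def cyclomatic_complexity(source: str) -> int:
    """Complexity = 1 (base path) + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

example = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(example))  # 3: base path + two branches
```

Real metric suites (e.g., the 300+ metrics catalogued by Caulo et al. [7]) cover far more properties than this single measure; the sketch only shows the general shape of a structural metric computed over source code.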
“…Firstly, we analyzed the existing papers about structural metrics in software engineering [15,32], most notably the work of Caulo et al., which presented a list of 300 such metrics [7]. Since most structural metrics were originally created for analyzing object-oriented code, applying them to notebooks, where classes are rarely used [30], poses an additional challenge.…”
Section: Structural Metrics
confidence: 99%