Proceedings of the 2018 International Conference on Technical Debt
DOI: 10.1145/3194164.3194173

Evaluating domain-specific metric thresholds

Cited by 25 publications (15 citation statements)
References 24 publications
“…The authors emphasised the need to pay special attention to software categories when comparing systems in distinct categories against predefined thresholds already available in the literature. For example, the study's empirical results were drawn from ten distinct categories (including Audio and Video, Graphics, …). Recently, Mori et al [67] argued that while deriving reliable thresholds for software metrics has been an ongoing research concern, there is still a lack of evidence about threshold variation across different software domains. In an attempt to address this limitation, the authors investigated whether and how thresholds vary across domains in a study of 3,107 software systems from 15 domains. In general, earlier studies have shown that application domains indeed have an effect on software metric values and thresholds.…”
Section: Related Work (mentioning)
confidence: 99%
“…Moreover, the results show that software metric values follow the same probability distribution regardless of the application domain. Mori et al [28] also analyzed the impact of domains on derived thresholds. In contrast to the study described above, they found evidence that software metric thresholds are sensitive to the software domain, although some domains still share similar thresholds for some of the analyzed software metrics [28].…”
Section: Study (mentioning)
confidence: 99%
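The statement above hinges on whether a metric's probability distribution depends on the application domain. As a minimal, hypothetical sketch (not code from any of the cited studies), one way to probe such a claim is a two-sample Kolmogorov-Smirnov test on metric values drawn from two domains; the metric name and all values below are invented placeholders.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(42)

    # Hypothetical WMC (weighted methods per class) values for two domains;
    # lognormal shapes roughly mimic the right skew typical of code metrics.
    wmc_audio_video = rng.lognormal(mean=1.5, sigma=0.6, size=500)
    wmc_graphics = rng.lognormal(mean=1.6, sigma=0.7, size=500)

    # H0: both samples come from the same distribution.
    stat, p_value = ks_2samp(wmc_audio_video, wmc_graphics)
    print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
    # A small p-value would suggest the metric's distribution differs by domain.

A rejected null hypothesis here would argue against domain-independent distributions, which is exactly the point on which the two cited studies disagree.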
“…Number of software projects per study:

Alves et al [1]         100
Arar and Ayan [2]       10
Ferreira et al [10]     40
Filo et al [12]         111 (from Qualitas Corpus [40])
Fontana et al [13]      74 (from Qualitas Corpus [40])
Mori et al [28]         3,107
Oliveira et al [30]     106 (from Qualitas Corpus [40])
Yamashita et al [44]    4,780
Vale et al [42]         103 (from Qualitas Corpus [40])

Ferreira et al [10] and Arar and Ayan [2] used smaller benchmarks, including 40 and 10 projects, respectively.…”
Section: Study (mentioning)
confidence: 99%
“…The proposed technique was more effective in predicting bad smells than Alves's technique. Mori et al [30] studied a large benchmark data set composed of more than 3,000 systems from 15 different domains. The study derives thresholds by ranking entities and choosing the top ranks (90% and 95%) as candidate thresholds.…”
Section: Related Work (mentioning)
confidence: 99%
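The quoted statement describes the derivation as ranking entities and taking the top ranks (90% and 95%) as candidate thresholds. The following is a minimal sketch of that percentile idea under stated assumptions: the metric values are invented placeholders, and this illustrates percentile-based thresholding in general, not the paper's exact procedure.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical per-class metric values (e.g., method count) from a
    # benchmark; a lognormal mimics the heavy right tail common to size metrics.
    metric_values = rng.lognormal(mean=2.0, sigma=0.8, size=3000)

    # Candidate thresholds at the top ranks: the 90th and 95th percentiles.
    t90, t95 = np.percentile(metric_values, [90, 95])
    print(f"90th-percentile candidate threshold: {t90:.1f}")
    print(f"95th-percentile candidate threshold: {t95:.1f}")

Entities whose metric value exceeds a candidate threshold would then be flagged as outliers worth inspecting, which is the usual role of benchmark-derived thresholds.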