2004
DOI: 10.1023/b:emse.0000027781.18360.9b
Comparative Assessment of Software Quality Classification Techniques: An Empirical Case Study

Cited by 147 publications (89 citation statements)
References 32 publications
“…It was a case study of quality modeling for a very large telecommunications system. Two other publications by Khoshgoftaar and Seliya, from 2004 [15] and 2005 [16], continued the previous concept and focused on commercial data analysis, but were not applied in a real-world environment. A similar approach can be found in publications by Ostrand and Weyuker [29], Ostrand et al. [31], Tosun et al. [45], and Turhan et al. [47,48].…”
Section: Related Work (mentioning)
confidence: 99%
“…To follow our initial approach, we used the basic mechanisms built into DePress/KNIME by constructing a workflow as shown in Figure 2. First, the dataset is split into two parts, by classifying rows into two different sets depending on the objective variable value ("1" or "0"), using the Row Splitter KNIME node.…”
Section: Objective Variable and Class Imbalance Counteraction (mentioning)
confidence: 99%
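The excerpt above describes partitioning a defect dataset into two sets according to the value of a binary objective variable, using the KNIME Row Splitter node. A minimal pandas sketch of the same split, assuming a hypothetical DataFrame with a binary column named "defective" (the column name and sample data are illustrative, not taken from the cited study):

import pandas as pd

def split_by_objective(df: pd.DataFrame, target: str = "defective"):
    """Return (positive, negative) partitions of df based on the binary target value."""
    positive = df[df[target] == 1]  # rows where the objective variable is "1"
    negative = df[df[target] == 0]  # rows where the objective variable is "0"
    return positive, negative

if __name__ == "__main__":
    # Illustrative software-metrics rows only; values are not from the cited work.
    data = pd.DataFrame({
        "loc": [120, 45, 300, 80],
        "cyclomatic": [10, 3, 25, 7],
        "defective": [1, 0, 1, 0],
    })
    faulty, clean = split_by_objective(data)
    print(len(faulty), len(clean))  # 2 2

This mirrors the workflow step only at the data level; downstream class-imbalance handling in the cited paper is done with further KNIME nodes and is not reproduced here.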
“…Size metrics and complexity metrics are known to be especially useful for assessing the quality of programs and for detecting poor-quality programs. This is because programs that are too large and/or too complex are often problematic, i.e., they are likely to contain more potential faults or to be costly in their maintenance phases [6]-[11]. Such problematic programs should be carefully reviewed and tested as early as possible.…”
Section: Introduction (mentioning)
confidence: 99%
“…The rest of the papers consists of 12 case studies [20][21][22][23][24][25][26][27][28][29][30][31], 11 reviews [32][33][34][35][36][37][38][39][40][41][42], 11 empirical analyses [43][44][45][46][47][48][49][50][51][52][53], three comparative analyses [54][55][56], and two field studies [57,58].…”
Section: Introduction (mentioning)
confidence: 99%