2010 International Symposium on Information Technology (ITSim)
DOI: 10.1109/itsim.2010.5561516
Establishing a defect prediction model using a combination of product metrics as predictors via Six Sigma methodology

Abstract: Defect prediction is an important aspect of the product development life cycle. The rationale for predicting the number of functional defects early in the life cycle, rather than simply finding as many defects as possible during the testing phase, is to determine when to stop testing and to ensure that all in-phase defects have been found in-phase before a product is delivered to the intended end user. It also ensures that wider test coverage is put in place to discover the predicted defects. This research is aimed…

Cited by 3 publications (1 citation statement)
References 1 publication
“…In defect prediction literature, there are many defect prediction algorithms studied like regression [43] [10] [40], rule induction [40], decision tree approaches like C4.5 [42], case-based reasoning (CBR) [23] [22] [40], artificial neural networks [24] [44] [21] [40], linear discriminant analysis [31], k-nearest neighbour [6], k-star [25], Bayesian networks [12] [35] [46] and support vector machine based classifiers [26] [19] [20] [41]. According to the no free lunch theorem [45], there is no algorithm which is better than other algorithms on all data sets.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
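The citation statement above surveys classifier families applied to defect prediction, including k-nearest neighbour. As a purely illustrative sketch (not code from the cited paper, and using hypothetical module metrics such as lines of code and cyclomatic complexity), a minimal k-NN defect predictor could look like this:

```python
import math
from collections import Counter

def knn_predict(train, labels, point, k=3):
    """Classify `point` by majority vote of its k nearest training points."""
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], point))[:k]
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical per-module metrics: (lines of code, cyclomatic complexity)
modules = [(120, 4), (300, 15), (90, 2), (450, 22), (200, 8), (60, 1)]
defective = [0, 1, 0, 1, 0, 0]  # 1 = a defect was found in this module

# A large, complex module lands among defective neighbours
print(knn_predict(modules, defective, (400, 20), k=3))  # prints 1
```

Consistent with the no-free-lunch observation quoted above, k-NN is only one of many candidate learners; which one performs best depends on the data set, so comparative evaluation across algorithms is the usual practice.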