2013 28th IEEE/ACM International Conference on Automated Software Engineering (ASE)
DOI: 10.1109/ase.2013.6693087

Personalized defect prediction

Abstract: Many defect prediction techniques have been proposed. While they often take the author of the code into consideration, none of these techniques builds a separate prediction model for each developer. Different developers have different coding styles, commit frequencies, and experience levels, which cause different defect patterns. When the defects of different developers are combined, such differences are obscured, hurting prediction performance. This paper proposes personalized defect prediction: building a …
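The core idea above, one prediction model per developer instead of a single global model, can be sketched in a few lines. The following is a minimal illustration assuming pandas and scikit-learn, with hypothetical column names (`author`, `is_buggy`) and an unspecified feature set; it is not the paper's actual implementation.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def train_personalized_models(changes: pd.DataFrame, feature_cols):
    """Return one classifier per developer, fit only on that developer's changes."""
    models = {}
    for dev, group in changes.groupby("author"):
        if group["is_buggy"].nunique() < 2:
            continue  # need both buggy and clean examples to fit a classifier
        model = LogisticRegression(max_iter=1000)
        model.fit(group[feature_cols], group["is_buggy"])
        models[dev] = model  # personalized model for this developer
    return models
```

At prediction time, a new change would be scored with its author's model; one plausible fallback for developers with too little history is a single model trained on all developers' changes, i.e., a conventional non-personalized predictor.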

Cited by 211 publications (169 citation statements, published 2015–2024). References 57 publications.

Selected citation statements, ordered by relevance:
“…After we generate the advanced features, our framework next constructs a classifier (i.e., a statistical model) based on the advanced features of the training changes (Step 4). In this paper, we use logistic regression [18] to build the classifier.…”
Section: Our Proposed Approach (mentioning)
confidence: 99%
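As a rough illustration of the step quoted above (fitting a logistic regression classifier on feature vectors extracted from training changes), the sketch below uses scikit-learn with synthetic placeholder data; the cited framework's actual "advanced features" are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 8))     # placeholder: one feature vector per training change
y_train = rng.integers(0, 2, size=200)  # placeholder: 1 = buggy change, 0 = clean

# Step 4 analogue: fit logistic regression on the training changes' features.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

# Score new changes: predicted probability that each change is defect-inducing.
print(clf.predict_proba(X_train[:3])[:, 1])
```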
“…Defect prediction techniques are proposed to help prioritize software testing and debugging; they can recommend software components that are likely to be defective to developers. Much research has been done on defect prediction; these techniques construct predictive classification models built on features such as lines of code, code complexity and number of modified files [2], [3], [4]. Prior studies mainly focus on predicting defects at coarse granularity level, such as file, package, or module [4], [5], [6].…”
Section: Introduction (mentioning)
confidence: 99%
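A toy version of the coarse-grained setting this quotation describes, using three of the named metrics (lines of code, complexity, number of modified files) as features, might look as follows. The data values are fabricated, and the choice of Naive Bayes (one of the classic learners cited elsewhere on this page) is an illustrative assumption.

```python
from sklearn.naive_bayes import GaussianNB

# Each row describes one file: [lines of code, cyclomatic complexity, #modifications].
files = [
    [120,  4,  2],
    [980, 31, 17],
    [310, 12,  5],
    [ 45,  2,  1],
]
labels = [0, 1, 1, 0]  # 1 = file was defective, 0 = clean (fabricated labels)

nb = GaussianNB().fit(files, labels)
print(nb.predict([[500, 20, 9]]))  # classify an unseen file as defect-prone or not
```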
“…Previous work proposed performance metrics (e.g., AUC-CE and P_effort) designed for evaluating the cost-effectiveness of prediction models [11,34,17,32,33,28]. However, the models were still built using traditional training algorithms; for example, D'Ambros et al [11] trained traditional linear regression models using the classical iteratively re-weighted least squares algorithm; Rahman and Devanbu [33] used four different machine learning techniques (i.e., Logistic Regression, J48, SVM, and Naive Bayes) that were trained using the corresponding classical training algorithms.…”
Section: Previous Work (mentioning)
confidence: 99%
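The cost-effectiveness metrics named in this quotation (e.g., AUC-CE and P_effort) build on a shared intuition: rank files by predicted risk and measure how many defects are found per unit of inspection effort. The sketch below computes a generic curve of that kind and its area; it is not the exact formula from any of the cited papers.

```python
import numpy as np

def cost_effectiveness_curve(risk, loc, defective):
    """Inspect files riskiest-first; return cumulative effort vs. defects found."""
    order = np.argsort(-np.asarray(risk, dtype=float))
    loc = np.asarray(loc, dtype=float)[order]
    bugs = np.asarray(defective, dtype=float)[order]
    effort = np.concatenate(([0.0], np.cumsum(loc) / loc.sum()))   # fraction of LOC inspected
    found = np.concatenate(([0.0], np.cumsum(bugs) / bugs.sum()))  # fraction of defects found
    return effort, found

effort, found = cost_effectiveness_curve(
    risk=[0.9, 0.2, 0.7, 0.1], loc=[300, 120, 800, 50], defective=[1, 0, 1, 0])
# Trapezoidal area under the curve as a single cost-effectiveness score.
auc = np.sum((found[1:] + found[:-1]) / 2.0 * np.diff(effort))
print(round(float(auc), 3))
```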
“…Examples of algorithms are logistic regression used by Zimmermann et al [14] to predict the defect proneness of classes using complexity, interaction and change metrics as predictors; Multi-Layer Perceptron (MLP), radial basis function (RBF), k-nearest neighbor (KNN), regression tree (RT), dynamic evolving neuro-fuzzy inference system (DENFIS), and Support Vector Regression (SVR) used by Elish [15] for defect prediction; Bayesian networks used by Bechta [16]; and Naive Bayes, J48, Alternative Decision Tree (ADTree), and One-R considered by Nelson et al [17]. Recently, other researchers have proposed further advanced machine learning techniques for defect prediction, such as Multivariate Adaptive Regression Splines (MARS) [18], Personalized Change Classification (PCC) [19], Logistic Model Trees (LMT) [20], ensemble learning [21], clustering algorithms [22], and combined techniques [23]. Interestingly, Lessmann et al [24] evaluated 22 classification models and showed that there are no statistically significant differences between the top 17 models when classifying software modules as defect-prone.…”
Section: Background and Problem Description (mentioning)
confidence: 99%
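A Lessmann-style comparison of several of the classifiers listed in this quotation can be reproduced in miniature with cross-validated AUC, as sketched below. The synthetic dataset and four-model shortlist are illustrative assumptions; the original study benchmarked 22 models.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "NaiveBayes": GaussianNB(),
    "kNN": KNeighborsClassifier(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),  # rough stand-in for J48
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f}")
```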