2015
DOI: 10.1109/tr.2014.2370891
A Learning-to-Rank Approach to Software Defect Prediction

Cited by 128 publications (70 citation statements)
References 24 publications
“…We provide a comprehensive evaluation and comparison of the LTR approach against additional algorithms for constructing SDP models for the ranking task. In previous work, the LTR approach has been compared with many other methods [17]. The linear learning-to-rank approach gives better results than count models.…”
Section: Learning-to-Rank Approach
confidence: 99%
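The linear learning-to-rank idea referenced in the statement above can be sketched as a weighted sum over module metrics, with modules ranked by score. This is a minimal illustration, not the paper's implementation; the metric vectors, defect counts, and weights below are hypothetical.

```python
# Minimal sketch of a linear learning-to-rank model for software
# defect prediction. All data and weights here are hypothetical.

def rank_modules(weights, modules):
    """Score each module as a weighted sum of its metrics and return
    module indices ordered from most to least defect-prone."""
    scores = [sum(w * m for w, m in zip(weights, metrics))
              for metrics in modules]
    return sorted(range(len(modules)), key=lambda i: -scores[i])

# Hypothetical data: each row is a module's metric vector
# (e.g. lines of code, complexity); defects[i] is its defect count.
modules = [[100, 4], [2000, 30], [500, 12]]
defects = [1, 20, 5]

order = rank_modules([0.001, 0.1], modules)
print(order)  # → [1, 2, 0]: modules with more defects rank earlier
```

A ranking model like this is typically evaluated with a rank-based measure (e.g. how well the predicted order agrees with the true defect counts) rather than classification accuracy.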
“…Recently, Canfora et al [8,9] proposed the application of multi-objective genetic algorithms to generate a set of classification models considering file size and recall as two objectives to optimize. Yang et al [38] used GAs to optimize the ranks of defect-prone software components predicted by simple Linear Regression model (LR) without taking into account the inspection cost.…”
Section: Training Regression Models With GAs
confidence: 99%
“…Specifically, previous papers use GAs for calibrating algorithms to predict defect proneness as a binary outcome (i.e., defective or non-defective artifacts), while we use regression models that, by definition, predict continuous values (e.g., number of defects), as done in [11] when measuring cost-effectiveness. The most important difference with respect to previous approaches is that they use traditional performance metrics for classification problems as fitness functions to optimize [13,19,23,38]. As explained in Section 2, traditional performance metrics are not well-suited to evaluate such predictions, since they give the same priority/importance to all defect-prone software components, independently of their size (cost) and the number of bugs (effectiveness).…”
Section: Training Regression Models With GAs
confidence: 99%
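The GA-based calibration described in the two statements above can be sketched as evolving the weights of a linear ranking model under a rank-based fitness function. The sketch below uses a simple fitness (fraction of correctly ordered module pairs) and hypothetical data; real studies use richer fitness functions such as cost-effectiveness.

```python
# Sketch of a genetic algorithm evolving linear-model weights with a
# rank-based fitness. All data here are hypothetical illustrations.
import random

modules = [[100, 4], [2000, 30], [500, 12], [50, 2]]
defects = [1, 20, 5, 0]

def fitness(weights):
    """Fraction of module pairs whose predicted score order matches
    the order of their true defect counts."""
    scores = [sum(w * m for w, m in zip(weights, row)) for row in modules]
    pairs = [(i, j) for i in range(len(modules))
             for j in range(i + 1, len(modules)) if defects[i] != defects[j]]
    agree = sum((scores[i] - scores[j]) * (defects[i] - defects[j]) > 0
                for i, j in pairs)
    return agree / len(pairs)

def evolve(pop_size=20, generations=30, seed=0):
    """Keep the fittest half each generation; fill the rest with
    Gaussian-mutated copies of random survivors."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [[w + rng.gauss(0, 0.1) for w in rng.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # typically near 1.0 on this toy, monotone data
```

Swapping the pairwise-agreement fitness for a cost-aware measure (weighting each component by its size) turns this into the cost-effectiveness-driven setup the statement contrasts with traditional classification metrics.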
“…Based on the investigation of historical metrics [1][2], defect prediction aims to detect the defect proneness of new software modules. Therefore, defect prediction is often used to help allocate limited development and maintenance resources reasonably [3][4][5]. With the advent of the big-data era and the development of machine learning techniques [6], many machine learning algorithms have been applied to solve practical problems [7][8][9].…”
Section: Introduction
confidence: 99%