2020
DOI: 10.1109/tr.2019.2931559
Improving Ranking-Oriented Defect Prediction Using a Cost-Sensitive Ranking SVM

Cited by 50 publications (8 citation statements)
References 80 publications
“…In our empirical study, we use three threshold-dependent evaluation metrics (Precision, Recall, and F-measure (F1)) and one threshold-independent evaluation metric (the Matthews correlation coefficient, MCC) to evaluate the performance of CSD models. These metrics are widely used in both software engineering studies [64][65][66][67][68][69][70][71] and artificial intelligence research [72][73][74][75]. In the binary classification problem, these four evaluation metrics can be calculated from a confusion matrix, as shown in Table 4.…”
Section: Performance Measures
confidence: 99%
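For reference, the sketch below (a minimal illustration, with made-up confusion-matrix counts) computes the four metrics named in the excerpt, Precision, Recall, F1, and MCC, from the TP/FP/FN/TN entries of a confusion matrix:

    # Minimal sketch: Precision, Recall, F1, and MCC from confusion-matrix
    # counts for a binary defect-prediction model. Counts are illustrative.
    import math

    def metrics(tp, fp, fn, tn):
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        mcc = (tp * tn - fp * fn) / math.sqrt(
            (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
        )
        return precision, recall, f1, mcc

    # Example confusion matrix: 40 TP, 10 FP, 20 FN, 130 TN.
    print(metrics(40, 10, 20, 130))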
“…Consequently, when testing resources are limited, ranking the modules in Project D can be more helpful than merely predicting whether or not a module is defective. Since this CPDP approach employs the learning-to-rank technique to build models, we call it ranking-oriented cross-project defect prediction (ROCPDP) [26].…”
Section: ROCPDP
confidence: 99%
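The ROCPDP excerpt refers to learning to rank, and the cited paper builds on a ranking SVM. The following is a minimal sketch of the classic pairwise reduction behind a plain ranking SVM (not the authors' cost-sensitive variant), with toy data and scikit-learn's LinearSVC standing in for a dedicated RankSVM solver:

    # Pairwise-reduction sketch of a ranking SVM (RankSVM). Each pair
    # (i, j) with y_i != y_j yields a difference vector x_i - x_j whose
    # label encodes the ordering; a linear SVM on these differences learns
    # a weight vector whose dot product ranks modules by defect-proneness.
    import numpy as np
    from itertools import combinations
    from sklearn.svm import LinearSVC

    def to_pairwise(X, y):
        """Turn a ranking problem into binary classification on feature differences."""
        diffs, labels = [], []
        for i, j in combinations(range(len(y)), 2):
            if y[i] == y[j]:
                continue  # ties carry no ordering information
            diffs.append(X[i] - X[j])
            labels.append(1 if y[i] > y[j] else -1)
        return np.array(diffs), np.array(labels)

    # Toy data: 6 modules, 3 software metrics each, y = defect counts.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(6, 3))
    y = np.array([5, 0, 2, 1, 0, 3])

    X_pairs, y_pairs = to_pairwise(X, y)
    svm = LinearSVC(fit_intercept=False).fit(X_pairs, y_pairs)

    # Score each module; higher score = predicted to rank higher.
    scores = X @ svm.coef_.ravel()
    print(np.argsort(-scores))  # modules ordered from most to least defect-prone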
“…An analysis of practical computing approaches shows that many calculation methods are available for content recommendation, but because individual schemes differ in design and focus, hybrid approaches have gradually emerged in practice. This paper studies the common dimensions of content recommendation algorithms from a coarse-grained perspective, as shown in Figure 4 [15]. Combining that structure with an analysis of the TF-IDF-based calculation process, the N targets waiting for recommendation are first assembled into an overall vector form, as shown below:…”
Section: Cascading Hybrid Algorithm
confidence: 99%
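The excerpt breaks off before the vector form itself. As a hedged illustration of the TF-IDF vectorization step it describes, the sketch below builds TF-IDF vectors for a toy set of N candidate items and scores them against a query by cosine similarity (the corpus and query are invented for illustration):

    # Minimal sketch of the TF-IDF step described above: N candidate items
    # are vectorized so cosine similarity can drive content recommendation.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    items = [  # toy descriptions of N = 3 candidate items
        "defect prediction with ranking svm",
        "cost sensitive learning for software defects",
        "hybrid content recommendation with tf idf",
    ]
    query = ["rank software modules by defect count"]

    vectorizer = TfidfVectorizer()
    item_vecs = vectorizer.fit_transform(items)  # N x V TF-IDF matrix
    query_vec = vectorizer.transform(query)

    # Recommend the items most similar to the user's profile/query.
    print(cosine_similarity(query_vec, item_vecs))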