2022
DOI: 10.1007/s11704-021-1013-5
Effort-aware cross-project just-in-time defect prediction framework for mobile apps

Cited by 13 publications (9 citation statements)
References: 43 publications
“…They used adversarial learning and experimented on 14 Android applications. This model gives better results when compared with 20 others [21].…”
Section: Background and Related Work
confidence: 74%
“…Most previous EADP studies [13, 16-21] usually built predictive models on historical labelled software modules and then predicted the defect-proneness of unlabelled modules in the same project, referred to as within-project EADP. However, in actual software development scenarios, it is hard to obtain a large amount of historical data from the same project, especially for newly developed software [22-27]. Therefore, Ni et al [28] proposed an effort-aware cross-project defect prediction (EACPDP) method called EASC in their IEEE Transactions on Software Engineering (TSE) paper titled 'Revisiting supervised and unsupervised methods for effort-aware cross-project defect prediction'.…”
Section: Motivations
confidence: 99%
“…Carka et al [67] proposed to evaluate the EADP performance using normalised PofB, which ranked software modules based on predicted defect densities. Zhao et al [21], Xu et al [68] and Cheng et al [22] proposed three JIT EADP methods for Android applications. Ni et al [28] proposed a supervised EACPDP method called EASC.…”
Section: Effort-aware Defect Prediction
confidence: 99%
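The normalised PofB mentioned in the citation above is an effort-aware ranking metric: modules are inspected in descending order of predicted defect density until a fixed fraction of the total inspection effort (typically LOC) is spent, and the metric reports the share of bugs caught within that budget. The sketch below is illustrative only: the 20% cutoff, the function name pofb_at, and the toy data are assumptions, and the exact normalisation used by Carka et al [67] may differ.

```python
# Hedged sketch of a PofB-style effort-aware metric (illustrative only).
import numpy as np

def pofb_at(effort_cutoff, pred_density, loc, bugs):
    """Percent of bugs found when inspecting modules, ranked by predicted
    defect density, until `effort_cutoff` (e.g. 0.2) of total LOC is spent."""
    order = np.argsort(-np.asarray(pred_density))     # highest predicted density first
    loc = np.asarray(loc)[order]
    bugs = np.asarray(bugs)[order]
    effort = np.cumsum(loc) / loc.sum()               # cumulative inspection effort
    inspected = effort <= effort_cutoff               # modules within the budget
    return bugs[inspected].sum() / bugs.sum() * 100.0

# Toy usage: 5 modules with hypothetical predictions, sizes, and bug counts.
print(pofb_at(0.2,
              pred_density=[0.9, 0.1, 0.5, 0.3, 0.7],
              loc=[100, 400, 50, 300, 150],
              bugs=[3, 1, 2, 0, 4]))   # -> 30.0 (3 of 10 bugs within 20% of LOC)
```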
“…Their empirical research results showed that (1) in terms of machine learning algorithms, the classification performance of Naive Bayes is better than that of logistic regression, decision table, and support vector machine; (2) in terms of ensemble methods, boosting is better than random forests, voting and bootstrap aggregating. Recently, Cheng et al proposed the KAL method [23] for effort-aware JIT defect prediction; they first transformed the original features into a high-dimensional feature space and adopted an adversarial learning technique to extract the common features. Experimental results showed that the KAL method achieved better performance than other comparative methods.…”
confidence: 99%
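The KAL description above combines two generic ideas: a kernel-based mapping of the change-level features into a higher-dimensional space, and adversarial learning so that representations from different projects become indistinguishable (i.e., "common" features). The following is a minimal sketch of that general recipe only, not the authors' KAL implementation; the use of scikit-learn's Nystroem approximation, the gradient-reversal discriminator, and all layer sizes and hyperparameters are assumptions made for illustration.

```python
# Sketch of "kernel mapping + adversarial common-feature learning".
# NOT the KAL implementation from Cheng et al; architecture and
# hyperparameters are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.kernel_approximation import Nystroem


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, reversed (scaled) gradient in the
    backward pass, so the encoder learns to fool the project discriminator."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None


def train_cross_project(X_src, y_src, X_tgt, epochs=200, lamb=1.0):
    # 1) Kernel transformation of the original features into a
    #    higher-dimensional feature space.
    kernel = Nystroem(kernel="rbf", n_components=128).fit(np.vstack([X_src, X_tgt]))
    S = torch.tensor(kernel.transform(X_src), dtype=torch.float32)
    T = torch.tensor(kernel.transform(X_tgt), dtype=torch.float32)
    y = torch.tensor(y_src, dtype=torch.float32).unsqueeze(1)

    encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # shared representation
    clf = nn.Linear(64, 1)   # defect-proneness head (trained on the source project)
    disc = nn.Linear(64, 1)  # source-vs-target project discriminator
    params = [*encoder.parameters(), *clf.parameters(), *disc.parameters()]
    opt = torch.optim.Adam(params, lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for _ in range(epochs):
        zs, zt = encoder(S), encoder(T)
        loss_defect = bce(clf(zs), y)
        # 2) Adversarial alignment: the discriminator tries to tell the
        #    projects apart while the reversed gradient pushes the encoder
        #    toward project-invariant ("common") features.
        z_all = GradReverse.apply(torch.cat([zs, zt]), lamb)
        d_labels = torch.cat([torch.zeros(len(S), 1), torch.ones(len(T), 1)])
        loss = loss_defect + bce(disc(z_all), d_labels)
        opt.zero_grad(); loss.backward(); opt.step()

    return encoder, clf  # predict on the target with torch.sigmoid(clf(encoder(T)))
```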