Cross-project defect prediction (CPDP) aims to build a prediction model on existing source projects and to predict labels for a target project. Differences in data distribution across projects make CPDP very challenging. Moreover, most existing CPDP methods require sufficient labeled data. However, acquiring large amounts of labeled data for a new project is difficult, while obtaining unlabeled data is relatively easy. A desirable approach is therefore to build a prediction model on both labeled and unlabeled data. CPDP in this scenario is called cross-project semi-supervised defect prediction (CSDP). Recently, generative adversarial networks have achieved impressive results owing to their strong ability to learn data distributions and discriminative representations. To effectively learn discriminative features from data across different projects, we propose a Discriminative Adversarial Feature Learning (DAFL) approach for CSDP. DAFL consists of a feature transformer and a project discriminator, which compete with each other. The feature transformer tries to generate a feature representation that captures discriminative information and preserves the intrinsic structure inferred from both labeled and unlabeled data. The project discriminator tries to distinguish source instances from target instances in the generated representation. Experiments on 16 projects show that DAFL performs significantly better than the baselines. Index Terms: cross-project defect prediction, adversarial learning, semi-supervised learning.
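The abstract does not give the exact training objective, but the competition it describes between feature transformer and project discriminator is typically a GAN-style minimax game. The following minimal numpy sketch illustrates that interplay under stated assumptions: the features, the linear discriminator, and all names are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical 2-D representations produced by a shared feature transformer.
source_feats = rng.normal(0.0, 1.0, size=(8, 2))  # source-project instances
target_feats = rng.normal(0.5, 1.0, size=(8, 2))  # target-project instances

# Illustrative linear project discriminator (source -> 1, target -> 0).
w = rng.normal(size=2)

def discriminator_loss(src, tgt, w):
    """Binary cross-entropy of the project discriminator."""
    p_src = sigmoid(src @ w)
    p_tgt = sigmoid(tgt @ w)
    return -(np.log(p_src).mean() + np.log(1.0 - p_tgt).mean()) / 2.0

d_loss = discriminator_loss(source_feats, target_feats, w)
# The transformer is trained adversarially: it tries to *maximize* the
# discriminator's loss so the two projects become indistinguishable,
# i.e. its adversarial term is the negation of d_loss.
t_loss = -d_loss
```

In a full implementation both players would be neural networks updated alternately (or via a gradient-reversal layer); the sketch only shows the opposing objectives.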
Cross-project defect prediction (CPDP) is a feasible way to build an accurate prediction model without sufficient historical data. Although existing CPDP methods that use only labeled data to build the prediction model achieve good results, there is still much room to improve prediction performance. In this paper, we propose a Semi-Supervised Discriminative Feature Learning (SSDFL) approach for CPDP. SSDFL first transfers knowledge from the source and target data into a common space by using a fully connected neural network to mine potential similarities between the source and target data. Next, we reduce the differences in both the marginal and conditional distributions between the mapped source and target data. We also introduce discriminative feature learning to make full use of label information, so that instances from the same class are close to each other and instances from different classes are far apart. Extensive experiments on 10 projects from the AEEEM and NASA datasets show that our approach achieves better prediction performance than the baselines.
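The discriminative feature learning described here (same-class instances pulled together, different-class instances pushed apart) is commonly realized as a contrastive-style pairwise loss. The sketch below is a hypothetical illustration of that idea in numpy; the toy features, the margin, and the exact loss form are assumptions, not the paper's definition.

```python
import numpy as np

# Toy learned features and labels (0 = clean, 1 = defective) for illustration.
feats = np.array([[0.0, 0.0], [0.2, 0.1], [2.0, 2.0], [2.1, 1.9]])
labels = np.array([0, 0, 1, 1])

def discriminative_loss(feats, labels, margin=1.0):
    """Pairwise loss: squared distance for same-class pairs (compactness),
    hinged margin penalty for different-class pairs (separation)."""
    intra, inter = [], []
    n = len(feats)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(feats[i] - feats[j])
            if labels[i] == labels[j]:
                intra.append(d ** 2)                     # pull together
            else:
                inter.append(max(0.0, margin - d) ** 2)  # push apart
    return np.mean(intra) + np.mean(inter)

loss = discriminative_loss(feats, labels)
```

On this toy data the two classes are already far apart, so only the small intra-class distances contribute to the loss; minimizing it over the network's parameters would tighten each class cluster while keeping the classes separated.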