Abstract-Cross-company defect prediction (CCDP) is a practical approach that trains a prediction model on one or more projects from a source company and then applies the model to a target company. Unfortunately, a large amount of irrelevant cross-company (CC) data usually makes it difficult to build a prediction model with high performance. Moreover, brute-force leveraging of CC data that is poorly related to within-company (WC) data may degrade the prediction model's performance. To address these issues, this paper introduces the Multi-Source TrAdaBoost algorithm, an effective transfer-learning approach to CCDP. The core ideas of our approach are: 1) to employ a limited amount of labeled WC data to weaken the impact of irrelevant CC data; and 2) to import knowledge not from one but from multiple sources to avoid negative transfer. The experimental results indicate that: 1) our proposed approach achieves the best overall performance among all tested CCDP approaches; and 2) only 10% of the labeled WC data is enough for our approach to achieve good CCDP performance.
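To make the two core ideas concrete, the sketch below shows a minimal Multi-Source TrAdaBoost-style learner in Python. It is an illustrative sketch rather than the paper's implementation: the decision-stump weak learner, the number of boosting rounds, the function and variable names, and the weight-update constants are assumptions; labels are assumed to be binary in {0, 1} and inputs NumPy arrays.

```python
# Minimal sketch of a Multi-Source TrAdaBoost-style learner for CCDP.
# Assumptions (not from the paper): decision stumps as weak learners,
# binary labels in {0, 1}, NumPy arrays, and scikit-learn availability.
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def multi_source_tradaboost(sources, X_wc, y_wc, n_rounds=20):
    """sources: list of (X_cc, y_cc) pairs, one per CC project.
    X_wc, y_wc: the small labeled within-company (WC) set."""
    # One weight vector per CC source plus one for the WC data;
    # each group starts with equal total weight (an illustrative choice).
    src_w = [np.ones(len(y)) / len(y) for _, y in sources]
    wc_w = np.ones(len(y_wc)) / len(y_wc)
    learners, alphas = [], []

    for _ in range(n_rounds):
        best = None
        # Train one candidate per source on (source + WC) data and keep
        # the candidate with the lowest weighted error on the WC data.
        for k, (X_cc, y_cc) in enumerate(sources):
            X = np.vstack([X_cc, X_wc])
            y = np.concatenate([y_cc, y_wc])
            w = np.concatenate([src_w[k], wc_w])
            clf = DecisionTreeClassifier(max_depth=1).fit(
                X, y, sample_weight=w / w.sum())
            err = np.dot(wc_w, clf.predict(X_wc) != y_wc) / wc_w.sum()
            if best is None or err < best[0]:
                best = (err, k, clf)

        err, k, clf = best
        err = np.clip(err, 1e-6, 0.499)  # keep the boosting update well defined
        alpha = 0.5 * np.log((1 - err) / err)

        # Boost misclassified WC instances (AdaBoost-style update) ...
        wc_w = wc_w * np.exp(alpha * (clf.predict(X_wc) != y_wc))
        # ... and shrink the weights of CC instances the chosen learner got
        # wrong, so CC data poorly related to the WC data loses influence.
        beta = 1.0 / (1.0 + np.sqrt(2.0 * np.log(len(src_w[k])) / n_rounds))
        src_w[k] = src_w[k] * beta ** (clf.predict(sources[k][0]) != sources[k][1])

        learners.append(clf)
        alphas.append(alpha)

    def predict(X):
        # Weighted vote of the selected weak learners, mapped to {-1, +1}.
        votes = sum(a * (2 * l.predict(X) - 1) for a, l in zip(alphas, learners))
        return (votes > 0).astype(int)

    return predict
```

The key point the sketch illustrates is that each boosting round draws its weak learner from whichever CC source currently fits the small labeled WC set best, while the weight updates progressively suppress CC instances that mislead the model, which is how the approach limits negative transfer.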
Abstract-Cross-company defect prediction (CCDP) is a practical approach that trains a prediction model on one or more projects from a source company and then applies the model to a target company. Unfortunately, a large amount of irrelevant cross-company (CC) data usually makes it difficult to build a prediction model with high performance. In addition, CC data is highly imbalanced between the defect-prone and non-defective classes, which degrades CCDP performance. To address these issues, this paper proposes an approach that combines data sampling with data filtering. Data sampling seeks a more balanced dataset through the addition or removal of instances, while data filtering removes irrelevant CC data so that the performance of CCDP models can be improved. We employ two data filtering methods, the NN filter and the DBSCAN filter, combined with SMOTE (Synthetic Minority Over-sampling Technique) and RUS (Random Under-Sampling). Combining these four techniques yields eight approaches: 1) NN filter performed prior to RUS; 2) NN filter performed after RUS; 3) NN filter performed prior to SMOTE; 4) NN filter performed after SMOTE; 5) DBSCAN filter performed prior to RUS; 6) DBSCAN filter performed after RUS; 7) DBSCAN filter performed prior to SMOTE; 8) DBSCAN filter performed after SMOTE. The empirical study was carried out on 15 publicly available project datasets. The experimental results demonstrate that the NN filter performed prior to RUS (Approach 1) outperforms the other seven approaches.
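For concreteness, the best-performing combination, Approach 1 (NN filter performed prior to RUS), can be assembled from standard components. The sketch below is illustrative rather than the paper's exact pipeline: it assumes a Burak-style NN filter (keep the k nearest CC neighbours of each WC instance) built with scikit-learn's NearestNeighbors, imbalanced-learn's RandomUnderSampler for RUS, NumPy-array inputs, and a Naive Bayes learner as a placeholder classifier; k and the other parameter values are assumptions.

```python
# Minimal sketch of Approach 1: NN filter first, then random under-sampling.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.naive_bayes import GaussianNB
from imblearn.under_sampling import RandomUnderSampler


def nn_filter(X_cc, y_cc, X_wc, k=10):
    """Keep only the CC instances that appear among the k nearest
    neighbours of at least one (unlabeled) WC instance."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_cc)
    _, idx = nn.kneighbors(X_wc)
    keep = np.unique(idx.ravel())
    return X_cc[keep], y_cc[keep]


def approach_1(X_cc, y_cc, X_wc):
    # Step 1: filter out CC data that lies far from the WC distribution.
    X_f, y_f = nn_filter(X_cc, y_cc, X_wc)
    # Step 2: random under-sampling to balance defect-prone vs. non-defective.
    X_bal, y_bal = RandomUnderSampler(random_state=0).fit_resample(X_f, y_f)
    # Step 3: train the defect prediction model on the filtered, balanced data.
    return GaussianNB().fit(X_bal, y_bal)
```

The other seven approaches follow the same pattern by swapping the filter (NN vs. DBSCAN), the sampler (RUS vs. SMOTE), or the order of the two steps.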