2020
DOI: 10.1109/access.2020.2974527
Adversarial Learning for Cross-Project Semi-Supervised Defect Prediction

Abstract: Cross-project defect prediction (CPDP) aims to build a prediction model on existing source projects and use it to predict the labels of a target project. Differences in data distribution across projects make CPDP very challenging. Moreover, most existing CPDP methods require sufficient labeled data. However, acquiring large amounts of labeled data for a new project is difficult, while obtaining unlabeled data is relatively easy. A desirable approach is building a prediction model on unlabeled data and lab…
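The abstract describes adversarial learning for semi-supervised CPDP: a shared feature extractor is trained so that a domain discriminator cannot tell source-project features from unlabeled target-project features, while a classifier learns defect labels from the labeled source data. Below is a minimal sketch of that adversarial alternation, assuming a PyTorch setup; the layer sizes, the 0.1 loss weight, and names such as feat, clf, disc, and train_step are illustrative assumptions, not the paper's exact method.

import torch
import torch.nn as nn

# Illustrative sizes: 20 software metrics in, 32-d shared feature space.
feat = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 32))
clf  = nn.Linear(32, 2)   # defect classifier (trained on labeled source data)
disc = nn.Linear(32, 2)   # domain discriminator (source project vs. target)

opt_task = torch.optim.Adam(list(feat.parameters()) + list(clf.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(xs, ys, xt):
    # (1) Update the discriminator: separate source features from target features.
    with torch.no_grad():
        fs, ft = feat(xs), feat(xt)
    d_logits = disc(torch.cat([fs, ft]))
    d_labels = torch.cat([torch.zeros(len(xs)), torch.ones(len(xt))]).long()
    loss_d = ce(d_logits, d_labels)
    opt_disc.zero_grad(); loss_d.backward(); opt_disc.step()

    # (2) Update extractor + classifier: predict defects on labeled source
    #     data while making target features look like source features.
    fs, ft = feat(xs), feat(xt)
    loss_task = ce(clf(fs), ys)
    loss_adv = ce(disc(ft), torch.zeros(len(xt)).long())  # fool the discriminator
    loss = loss_task + 0.1 * loss_adv
    opt_task.zero_grad(); loss.backward(); opt_task.step()
    return loss_task.item(), loss_d.item()

One call per mini-batch, e.g. train_step(torch.randn(16, 20), torch.randint(0, 2, (16,)), torch.randn(16, 20)). Only the unlabeled target batch xt enters the adversarial term, which matches the semi-supervised setting the abstract describes.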

Cited by 14 publications (16 citation statements) | References 66 publications
“…Under such restrictions, a robust feature extractor can be constructed with transfer learning [10][11][12], which tries to discover shared data features that are invariant across subjects and tasks. In particular, promising results were demonstrated for transfer learning by censoring nuisance features via adversarial training [13][14][15][16][17][18][19]. These works use adversarial methods to learn universal features shared by an attribute group, where a discriminative unit, trained adversarially against the feature extractor, distinguishes the shared features with respect to the different attributes.…”
Section: Introduction
confidence: 99%
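The adversarial censoring of nuisance features that this statement refers to is commonly implemented with a gradient reversal layer: the nuisance discriminator trains normally, while the sign-flipped gradient flowing back into the feature extractor pushes it to discard the nuisance attribute. A minimal sketch assuming PyTorch; GradReverse and grad_reverse are illustrative names, not an API from the cited works.

import torch

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; flips the gradient sign on the backward
    # pass, so the feature extractor is trained *against* the nuisance
    # discriminator that follows it.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

In use, the nuisance head receives grad_reverse(encoder(x)); minimizing the head's loss then updates the discriminator normally while adversarially updating the encoder.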
“…CKSDL combines a cost-sensitive technique and a kernel technique to further improve the prediction model. Sun et al. [18] then introduced adversarial learning into CPDP and embedded a triplet sampling strategy into an adversarial learning framework.…”
Section: Cross-Project Defect Prediction
confidence: 99%
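The triplet sampling strategy mentioned for Sun et al. [18] pairs each anchor with a same-label positive and a different-label negative, so learned features cluster by defect label. A minimal sketch using PyTorch's built-in triplet margin loss; the random within-batch sampling shown here is an assumption, not the paper's exact strategy.

import torch
import torch.nn.functional as F

def sample_triplets(feats, labels):
    # For each anchor, pick a random positive (same label) and a random
    # negative (different label) from the same batch.
    a, p, n = [], [], []
    for i in range(len(feats)):
        pos = (labels == labels[i]).nonzero().flatten()
        pos = pos[pos != i]
        neg = (labels != labels[i]).nonzero().flatten()
        if len(pos) == 0 or len(neg) == 0:
            continue  # anchor has no valid positive or negative
        a.append(feats[i])
        p.append(feats[pos[torch.randint(len(pos), (1,))]].squeeze(0))
        n.append(feats[neg[torch.randint(len(neg), (1,))]].squeeze(0))
    return torch.stack(a), torch.stack(p), torch.stack(n)

feats = torch.randn(8, 32)                       # e.g. feature extractor outputs
labels = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])  # defective vs. clean
anchor, positive, negative = sample_triplets(feats, labels)
loss = F.triplet_margin_loss(anchor, positive, negative, margin=1.0)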
“…D-cAE and D-cRAE represent cAE with hard-split and soft-split bottleneck variables, respectively, linked to the nuisance network only. DA-cAE and DA-cRAE specify hard-split and soft-split representations connected to both the adversary and nuisance networks, with the decoder conditioned on s. Note that A-cAE resembles the traditional adversarial learning methods presented in [20][21][22][23], where only one adversarial unit is adopted.…”
Section: Model Implementations
confidence: 99%
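As a rough illustration of the hard-split bottleneck and the s-conditioned decoder this statement describes: the latent code is partitioned into disjoint slices feeding different heads, and the decoder reconstructs from the full code concatenated with the attribute s. A sketch under assumed shapes and names (enc, dec, the 8/8 split, a scalar s), assuming PyTorch; it is not the cited models' actual implementation.

import torch
import torch.nn as nn

enc = nn.Linear(20, 16)        # encoder to a 16-d bottleneck
dec = nn.Linear(16 + 1, 20)    # decoder conditioned on a scalar attribute s

def forward(x, s):
    z = enc(x)
    # Hard split: disjoint slices of the bottleneck feed different heads.
    z_task, z_nuisance = z[:, :8], z[:, 8:]   # task head / nuisance (adversary) head
    x_hat = dec(torch.cat([z, s], dim=1))     # reconstruction conditioned on s
    return x_hat, z_task, z_nuisance

x_hat, z_task, z_nuisance = forward(torch.randn(4, 20), torch.randn(4, 1))

A soft split would instead let the two heads read overlapping (weighted) parts of z rather than disjoint slices.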
“…Addressing biosignal datasets collected from a small number of subjects, transfer learning methods [11][12][13][14] are applied to build strong feature learners that extract robust, invariant features across various tasks and/or unknown subjects. In particular, adversarial transfer learning [15][16][17][18][19][20][21][22][23] has demonstrated impressive results in constructing such discriminative feature extractors. Traditional adversarial transfer learning works aim to extract latent representations universally shared by a group of attributes using adversarial inference, where a discriminative network is trained adversarially against the feature extractor to differentiate the universal features across attributes.…”
confidence: 99%