2020
DOI: 10.1609/aaai.v34i04.6091

Unsupervised Domain Adaptation via Structured Prediction Based Selective Pseudo-Labeling

Abstract: Unsupervised domain adaptation aims to address the problem of classifying unlabeled samples from the target domain whilst labeled samples are only available from the source domain and the data distributions differ between the two domains. As a result, classifiers trained on labeled samples from the source domain suffer a significant performance drop when directly applied to samples from the target domain. To address this issue, different approaches have been proposed to learn domain-invariant featur…
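The paper's title names selective pseudo-labeling: rather than pseudo-labeling every target sample at once, only the most confident predictions are kept in each round. A minimal, generic sketch of that selection step is below (the function name and the simple top-fraction confidence rule are illustrative assumptions, not the paper's exact structured-prediction criterion):

```python
import numpy as np

def selective_pseudo_label(probs, frac):
    """Keep only the most confident fraction of target predictions.

    probs: (n_samples, n_classes) class probabilities for unlabeled
    target samples; frac: fraction of samples to pseudo-label now.
    Returns (labels, mask) where mask marks the selected samples.
    """
    labels = probs.argmax(axis=1)          # tentative pseudo-labels
    conf = probs.max(axis=1)               # prediction confidence
    k = max(1, int(frac * len(conf)))
    # Select the k most confident samples; the rest stay unlabeled
    # until a later iteration, when frac is typically increased.
    selected = np.argsort(conf)[-k:]
    mask = np.zeros(len(conf), dtype=bool)
    mask[selected] = True
    return labels, mask
```

In progressive schemes of this kind, `frac` grows over iterations so that easy target samples train the model before harder, noisier ones are admitted.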

Cited by 183 publications (150 citation statements)
References 19 publications
“…MASF [Pattern Recognit. 2020] employs an L2 constraint combined with sparse filtering to learn both domain-shared and discriminative representations. Selective pseudo-labeling (SPL) [36]: SPL [AAAI 2020] is also a selective pseudo-labeling strategy based on structured prediction.…”
Section: Methods
confidence: 99%
“…The high-dimensional feature space always contains noisy or redundant information. To address this, we first adopt PCA to reduce the feature dimension and obtain more robust features than the original feature data, as in [Wang and Breckon, 2019]. We concatenated the features from the visual representations of 2D images and 3D objects as a matrix…”
Section: Subspace Generation
confidence: 99%
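The excerpt above describes concatenating 2D-image and 3D-object features and applying PCA to suppress the noisy, redundant dimensions. A minimal NumPy sketch of that pipeline (the feature matrices and dimensions here are hypothetical placeholders for the cited work's network features):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project X onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)             # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T     # (n_samples, n_components)

rng = np.random.default_rng(0)
# Hypothetical visual features: 100 samples of 2D-image and
# 3D-object descriptors (dimensions are illustrative only).
X_2d = rng.normal(size=(100, 64))
X_3d = rng.normal(size=(100, 32))

# Concatenate the two views into one matrix, then reduce the
# high-dimensional space as the excerpt describes.
X = np.concatenate([X_2d, X_3d], axis=1)   # shape (100, 96)
X_low = pca_reduce(X, 16)                  # shape (100, 16)
```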
“…We first initialize the cluster centers of the target domain by Eq. (4). Then, to make the cluster centroids of the target domain close to the class prototypes of the source domain, we define the optimization problem as in [Wang and Breckon, 2019]:…”
Section: Intrinsic Structure of the Source Domain
confidence: 99%
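The idea in this excerpt — seed the target clusters from source class prototypes, then pull target centroids toward them — can be sketched with generic k-means-style updates. This is an assumption-laden illustration, not the cited work's actual objective (its Eq. (4) is not reproduced here):

```python
import numpy as np

def init_target_centroids(Xs, ys, Xt, n_iter=5):
    """Initialize target cluster centers from source class prototypes,
    then refine them with k-means-style updates on target features."""
    classes = np.unique(ys)
    # Source class prototypes: per-class mean of source features.
    centers = np.stack([Xs[ys == c].mean(axis=0) for c in classes])
    assign = np.zeros(len(Xt), dtype=int)
    for _ in range(n_iter):
        # Assign each target sample to its nearest center ...
        d = ((Xt[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        # ... then move each center to the mean of its assigned
        # samples (empty clusters keep their current prototype).
        for k in range(len(classes)):
            if (assign == k).any():
                centers[k] = Xt[assign == k].mean(axis=0)
    return centers, assign
```

The assignments produced this way double as initial pseudo-labels for the target samples, which is why a good initialization from source prototypes matters.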