2021
DOI: 10.1109/tmm.2020.3016126
Adversarial Network With Multiple Classifiers for Open Set Domain Adaptation

Abstract: Domain adaptation aims to transfer knowledge from a domain with adequate labeled samples to a domain with scarce labeled samples. Prior research has introduced various open set domain adaptation settings in the literature to extend the applications of domain adaptation methods in real-world scenarios. This paper focuses on the type of open set domain adaptation setting where the target domain has both a private ('unknown classes') label space and the shared ('known classes') label space. However, the source domai…

Cited by 63 publications (7 citation statements)
References 44 publications
“…Previous Domain Adaptation (DA) approaches often design a number of losses to measure the discrepancy between source and target domains (Saito et al., 2018; Long et al., 2016; Tzeng et al., 2014; Zellinger et al., 2017; Roy et al., 2019). Then, adversarial training is applied in DA (Mathieu et al., 2016; Huang et al., 2018; Shermin et al., 2020; Tang & Jia, 2020). In the problem settings of these works, the supervision knowledge reflected in the source labels is accurate and reliable, which is not the case in our problem (where a weak annotator generates inaccurate labels)…”
Section: Domain Adaptation
confidence: 99%
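The citation above refers to losses that measure the discrepancy between source and target feature distributions. One widely used choice in this family is Maximum Mean Discrepancy (MMD); the sketch below is a minimal, illustrative numpy estimate (the kernel bandwidth `gamma` and the toy Gaussian data are assumptions for demonstration, not from the cited works):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of X and rows of Y.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def mmd2(source, target, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy.

    Small values mean the two feature distributions are hard to
    tell apart under the chosen kernel; DA methods minimize this.
    """
    k_ss = rbf_kernel(source, source, gamma).mean()
    k_tt = rbf_kernel(target, target, gamma).mean()
    k_st = rbf_kernel(source, target, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(100, 2))       # "source" features
tgt_near = rng.normal(0.1, 1.0, size=(100, 2))  # small domain shift
tgt_far = rng.normal(3.0, 1.0, size=(100, 2))   # large domain shift
print(mmd2(src, tgt_near) < mmd2(src, tgt_far))  # True: bigger shift, bigger MMD
```

In a full pipeline this quantity would be computed on network features and added to the classification loss, so the feature extractor is pushed to make the two domains indistinguishable.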
“…Unsupervised domain adaptation (UDA) aims at bridging the gap between an annotated source domain and an unannotated target domain. Typically, UDA methods transfer the knowledge in four ways: 1) directly minimizing the statistical distribution distance between the two domains [22], [23]; 2) inducing domain-invariant feature generation [9], [10], [24]; 3) learning from synthesized images [25]–[29]; and 4) self-training via pseudo labels [30], [31]. By alleviating the cross-domain discrepancy at the feature and appearance levels, UDA methods have achieved outstanding performance in cross-domain classification [9], [10], [27], segmentation [32]–[34], and detection [11], [13], [14], [30], [35], [36]…”
Section: Related Work, A. Unsupervised Domain Adaptation
confidence: 99%
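The second strategy above, domain-invariant feature generation, is often realized adversarially via a gradient reversal layer (GRL): the forward pass is the identity, while the backward pass negates the gradient flowing from a domain classifier, so the feature extractor learns to fool it. A minimal numpy sketch of just that sign flip (the scaling constant `LAMBDA` is an assumption; real methods anneal it during training):

```python
import numpy as np

LAMBDA = 1.0  # reversal strength (illustrative; schedules vary across methods)

def grl_forward(x):
    # Forward pass: identity, features pass through unchanged.
    return x

def grl_backward(grad_output, lam=LAMBDA):
    # Backward pass: negate (and scale) the domain-classifier gradient,
    # so minimizing the domain loss *maximizes* domain confusion upstream.
    return -lam * grad_output

g = np.array([0.5, -2.0])
print(grl_forward(g))   # [ 0.5 -2. ]  (unchanged)
print(grl_backward(g))  # [-0.5  2. ]  (sign-flipped)
```

In an autograd framework this pair would be packaged as a custom function sitting between the feature extractor and the domain classifier; the rest of the network trains normally.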
“…They proposed a basic framework that leverages labeled base data and a saliency model to discover novel classes in unlabeled images through clustering. Shermin et al. [24] introduced multiple classifiers within an adversarial framework to enhance domain adaptation in scenarios where target domain classes are incompletely known. These works contribute to the field of NCD by exploring different aspects such as image classification, 3D semantic segmentation, contrastive learning, and leveraging prior knowledge…”
Section: Novel Class Discovery
confidence: 99%
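In the open set setting described by the abstract and the citation above, target samples from private ('unknown') classes must be rejected rather than forced into a known class. One common, simple mechanism (a generic sketch, not the specific multi-classifier scheme of the paper; the threshold value is an assumption) is to reject any sample whose maximum softmax confidence falls below a threshold:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def predict_open_set(logits, threshold=0.5):
    """Assign label -1 ('unknown') when max softmax probability < threshold."""
    probs = softmax(logits)
    confidence = probs.max(-1)
    labels = probs.argmax(-1)
    labels[confidence < threshold] = -1
    return labels

logits = np.array([[4.0, 0.1, 0.2],   # confidently known class 0
                   [1.0, 1.1, 0.9]])  # ambiguous: rejected as unknown
print(predict_open_set(logits))  # [ 0 -1]
```

Methods like the one cited refine this idea by training dedicated classifiers (or boundaries) for the unknown category instead of relying on a fixed hand-set threshold.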