2022 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn55064.2022.9892322
Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation

Cited by 23 publications (14 citation statements)
References 30 publications
“…Unsupervised Domain Adaptation: Unsupervised domain adaptation methods try to overcome the gap between the training data domain, the so-called source domain, and the target domain, which is unlabeled but relevant. These methods can be categorized into three types: distribution alignment of the source and target domain data in the input space (e.g., Hoffman et al., 6 Yunsheng et al. 7 and Yang et al. 8 ), the feature space (e.g., Niemeijer et al., 9 Hoffman et al. 10 and Marsden et al. 11 ), and the output space (e.g., Vu et al., 12 Tsai et al. 13 and Zheng et al. 14 ). Distribution alignment in the input space often relies on CycleGANs, which compute a style transformation from the source to the target image domain.…”
Section: Related Work
confidence: 99%
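To make the input-space category concrete, below is a minimal PyTorch sketch of a CycleGAN-style objective as described in the excerpt above: an adversarial loss pushes translated source images toward the target style, while a cycle-consistency loss preserves image content so that the source labels remain valid for the translated images. The modules G_s2t, G_t2s and D_t are toy stand-ins chosen for illustration, not the architectures used by the cited works.

```python
import torch
import torch.nn as nn

# Hypothetical placeholder networks; real CycleGANs use ResNet-based
# generators and PatchGAN discriminators.
G_s2t = nn.Conv2d(3, 3, 3, padding=1)  # source -> target style generator
G_t2s = nn.Conv2d(3, 3, 3, padding=1)  # target -> source style generator
D_t = nn.Conv2d(3, 1, 4, stride=2, padding=1)  # target-domain discriminator

l1 = nn.L1Loss()
bce = nn.BCEWithLogitsLoss()

def cyclegan_losses(x_s, x_t, lambda_cyc=10.0):
    """Generator and discriminator losses for source->target style transfer.
    lambda_cyc is an illustrative weight (10.0 is a common choice)."""
    fake_t = G_s2t(x_s)  # source image rendered in target style
    pred_fake = D_t(fake_t)
    # Generator objective: fool D_t, and reconstruct the source image
    # (cycle consistency preserves content, so source labels stay valid).
    loss_g = bce(pred_fake, torch.ones_like(pred_fake)) \
           + lambda_cyc * l1(G_t2s(fake_t), x_s)
    # Discriminator objective: separate real target images from translations.
    pred_real = D_t(x_t)
    pred_fake_d = D_t(fake_t.detach())
    loss_d = bce(pred_real, torch.ones_like(pred_real)) \
           + bce(pred_fake_d, torch.zeros_like(pred_fake_d))
    return loss_g, loss_d

x_s = torch.randn(2, 3, 64, 64)  # source batch (e.g., synthetic images)
x_t = torch.randn(2, 3, 64, 64)  # target batch (e.g., real images)
print(cyclegan_losses(x_s, x_t))
```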
“…[18,21,43] employ partially dense contrast between classes, using pixel features as anchors and class-level prototype vectors as positives and negatives. Along similar lines, [22,43] implement partially dense contrast between classes using pixel features as anchors and estimated class-level distributions as positives and negatives, while [28] uses class prototypes both as anchors and as positives/negatives. These approaches are prone to false positive/negative samples, which contaminate the contrastive loss due to potential errors in the target-domain pseudo-labels that are used both to determine the anchors and to compute the class prototypes serving as positives and negatives.…”
Section: Related Work
confidence: 99%
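The pixel-to-prototype contrast described in this excerpt can be sketched as an InfoNCE-style loss in which each pixel embedding is an anchor, the prototype of its pseudo-labeled class is the positive, and the remaining class prototypes are negatives. The following is a generic sketch under assumed names and shapes (feats, prototypes, tau are illustrative), not any cited paper's exact formulation; the pseudo_labels argument is exactly where erroneous target-domain pseudo-labels introduce the false positives/negatives noted above.

```python
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(feats, pseudo_labels, prototypes, tau=0.1):
    """Pixel-to-prototype contrastive loss (generic sketch).

    feats:         [N, D] pixel embeddings used as anchors
    pseudo_labels: [N]    target-domain pseudo-labels; errors here
                          create false positives/negatives in the loss
    prototypes:    [C, D] class-level prototype vectors
    """
    feats = F.normalize(feats, dim=1)
    prototypes = F.normalize(prototypes, dim=1)
    logits = feats @ prototypes.t() / tau  # [N, C] cosine similarities
    # InfoNCE: the prototype of the (pseudo-)label class is the positive,
    # all other class prototypes act as negatives.
    return F.cross_entropy(logits, pseudo_labels)

# Toy usage: 19 Cityscapes-style classes, 128-d embeddings.
feats = torch.randn(1024, 128)
pseudo_labels = torch.randint(0, 19, (1024,))
prototypes = torch.randn(19, 128)
print(prototype_contrastive_loss(feats, pseudo_labels, prototypes))
```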
“…This is especially true for the synthetic-to-real-world domain change. Generally speaking, UDA methods aim to align the distributions in the input space (e.g., Hoffman et al. [7], Yunsheng et al. [12], Termöhlen et al. [27] and Yang et al. [30]), the feature space (e.g., Niemeijer et al. [17,16], Hoffman et al. [8] and Marsden et al. [13]), or the output space (e.g., Vu et al. [29], Tsai et al. [28] and Zheng et al. [31]) of a neural network. The recent state of the art is dominated by transformer-based architectures that utilize adaptation techniques in the feature and output space, as, for instance, presented in [9].…”
Section: Unsupervised Domain Adaptation (UDA)
confidence: 99%
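For the output-space category named here, a minimal sketch in the spirit of Tsai et al.: a discriminator operates on the segmentation softmax maps, and an adversarial term encourages target-domain output maps to resemble source-domain ones. The networks below are hypothetical placeholders (a real setup would use, e.g., a DeepLab-style segmentation network and a fully convolutional discriminator), and lambda_adv is an illustrative weight.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 19  # e.g., Cityscapes classes

# Hypothetical stand-ins for the segmentation network and the
# output-space discriminator on the softmax maps.
seg_net = nn.Conv2d(3, NUM_CLASSES, 1)
disc = nn.Conv2d(NUM_CLASSES, 1, 4, stride=2, padding=1)
bce = nn.BCEWithLogitsLoss()

def output_space_adaptation_step(x_s, y_s, x_t, lambda_adv=0.001):
    """Segmentation-network loss for output-space alignment (sketch)."""
    # Supervised segmentation loss on the labeled source domain.
    loss_seg = F.cross_entropy(seg_net(x_s), y_s)
    # Adversarial loss: target softmax maps should fool the discriminator
    # into predicting "source", aligning the output distributions.
    p_t = F.softmax(seg_net(x_t), dim=1)
    pred = disc(p_t)
    loss_adv = bce(pred, torch.ones_like(pred))  # "source" label = 1
    return loss_seg + lambda_adv * loss_adv

x_s = torch.randn(2, 3, 64, 64)                        # labeled source batch
y_s = torch.randint(0, NUM_CLASSES, (2, 64, 64))       # source labels
x_t = torch.randn(2, 3, 64, 64)                        # unlabeled target batch
print(output_space_adaptation_step(x_s, y_s, x_t))
```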