2021
DOI: 10.1109/tip.2020.3018221
Affinity Space Adaptation for Semantic Segmentation Across Domains


Cited by 59 publications (28 citation statements). References 53 publications.
“…UDA aims to generalize the model learned from the labeled source domain to another unlabeled target domain. In the field of UDA, a group of approaches has shown promising results in object detection [9]-[22] and semantic segmentation [23]-[45], [53], [54]. The current mainstream approaches for these two tasks include adversarial learning [10]-[12], [26], [29], [55], self-training [8], [41], [42], [56], and self-ensembling [39], [54], [57]-[63].…”
Section: Related Work
confidence: 99%
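To make the adversarial-learning branch of this taxonomy concrete, here is a minimal sketch of output-space adversarial alignment, the common pattern behind many of the cited methods. It is illustrative only: the discriminator architecture, the function names, and the loss weight `lam` are assumptions for the sketch, not any specific paper's implementation. A discriminator learns to tell source segmentation maps from target ones, while the segmentation network is trained to fool it on target images.

```python
# Minimal sketch of output-space adversarial alignment for UDA segmentation.
# All names (Discriminator, uda_step, lam) are hypothetical stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Patch discriminator over C-class softmax maps: 1 = source, 0 = target."""
    def __init__(self, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def uda_step(seg, disc, opt_seg, opt_disc, x_src, y_src, x_tgt, lam=0.001):
    bce = nn.BCEWithLogitsLoss()

    # 1) Update the segmentation net: supervised loss on source images
    #    plus an adversarial term that makes target outputs look source-like.
    p_src = seg(x_src)                                   # (B, C, H, W) logits
    p_tgt = seg(x_tgt)
    loss_sup = F.cross_entropy(p_src, y_src)             # y_src: (B, H, W) labels
    d_tgt = disc(F.softmax(p_tgt, dim=1))
    loss_adv = bce(d_tgt, torch.ones_like(d_tgt))        # fool the discriminator
    opt_seg.zero_grad()
    (loss_sup + lam * loss_adv).backward()
    opt_seg.step()

    # 2) Update the discriminator to separate the two domains
    #    (segmentation outputs are detached so only disc receives gradients).
    d_src = disc(F.softmax(p_src.detach(), dim=1))
    d_tgt = disc(F.softmax(p_tgt.detach(), dim=1))
    loss_d = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
    opt_disc.zero_grad()
    loss_d.backward()
    opt_disc.step()
```

Self-training and self-ensembling, the other two branches named in the excerpt, replace the discriminator with pseudo-labels or a temporally averaged teacher network, respectively.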
“…Annotating a large-scale dataset for each new domain is costly and time-consuming. Unsupervised domain adaptation (UDA) has emerged in response, showing promising results on object detection [9]-[22] and semantic segmentation [23]-[45] by aiming to reduce the domain shift between the source and target domains.…”
Section: Introduction
confidence: 99%
“…Chang et al. [11] construct the DISE framework, which extracts domain-invariant structure and domain-specific texture information to reduce source-target discrepancies. More recent works further prioritize category-level alignment (CLAN) [63], minimize adversarial entropy [67], or perform affinity-space domain adaptation [68].…”
Section: Unsupervised Domain Adaptive Semantic Segmentation
confidence: 99%
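As context for the adversarial-entropy approach [67] mentioned above, the underlying entropy-minimization objective fits in a few lines: penalize high-entropy (uncertain) per-pixel predictions on unlabeled target images. The sketch below is a generic illustration, with the function name and the log-C normalization chosen for the sketch rather than taken from [67]:

```python
# Generic entropy-minimization loss for unlabeled target images (illustrative).
import math
import torch
import torch.nn.functional as F

def entropy_loss(logits, eps=1e-8):
    """Mean normalized Shannon entropy of per-pixel class posteriors.

    logits: (B, C, H, W) raw segmentation scores on target images.
    Minimizing this pushes the model toward confident (low-entropy)
    decisions on the unlabeled domain.
    """
    p = F.softmax(logits, dim=1)
    ent = -(p * torch.log(p + eps)).sum(dim=1)   # (B, H, W) per-pixel entropy
    num_classes = logits.shape[1]
    return ent.mean() / math.log(num_classes)    # normalize to [0, 1]
```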
“…* denotes further adding source images from WildDash to complement Cityscapes.

Method, setting | mIoU | per-class IoU (19 values)
(label truncated) | — | …68. 8.30 75.80 9.49 21.64 15.91 5.85 9.26 71.08 31.50 85.13 6.55 1.68 55.48 24.91 30.22 0.52 0.53 17.00
DANet S, S | 38.51 | 61.78 21.11 74.59 22.59 29.93 14.79 15.00 10.17 66.94 19.03 82.57 31.03 21.24 53.26 54.67 37.77 39.40 43.84 31.95
DANet A, S | 39.16 | 61.34 20.71 76.52 20.53 30.03 14.19 15.69 10.09 68.60 18.84 82.08 33.16 21.75 57.68 53.88 40.33 41.47 46.11 31.00
DANet S+A, S | 39.28 | 62.43 21.89 76.22 21.42 30.54 14.85 14.10 9.76 69.07 19.94 82.84 34.56 19.30 56.51 53.04 42.51 39.47 45.71 32.09
DANet S, R | 39.46 | 62.75 23.17 76.65 23.90 30.82 14.84 18.44 10.09 69.10 17.60 82.78 33.51 21.53 55.97 51.78 41.77 36.90 46.11 32.12
DANet S+A, R | 39.76 | 63.11 24.63 76.17 25.03 30.56 13.68 15.68 10.53 67.31 22.41 80.15 32.95 21.11 54.39 53.51 43.64 42.20 46.71 31.66
DANet S+A+F, R | 40.52 | 62.90 25.58 76.62 24.45 30.37 14.45 16.75 9.96 67.87 19.70 82.04 34.18 22.95 56.99 54.27 44.15 47.75 46.98 31.86
DANet-SSL S+A, R | 41.39 | 67.24 27.98 77.18 25.11 25.80 15.33 10.59 6.58 69.24 33.89 80.96 32.18 5.29 69.86 59.70 36.20 65.99 47.47 29.87
DANet-SSL S+A+F, R | 41.99 | 70.21 30.24 78.44 26.72 28.44 14.02 11.67 5.79 68.54 38.20 85.97 28.14 0.00 70.36 60.49 38.90 77.80 39.85 24.02
(label truncated) | — | …63 35.30 78.52 25.27 33.51 14.43 13.80 7.31 63.52 34.94 84.31 34.54 19.08 70.05 49.14 48.80 75.11 47.53 35.36
DANet-SSL* S+A+F, R | 44.66 | 75.85 34.21 82.58 28.75 35.58 18.51 12.65 12.49 71.33 37.51 89.80 38.68 15.99 76.59 62.81 12.25 61.56 48.18 33.26…”
confidence: 99%
“…A pixel-neighborhood relationship approach was recently studied to address this problem. Zhou [8] presented an affinity space for semantic segmentation that highlights structure using co-occurring output patterns between neighboring pixels.…”
Section: Introduction
confidence: 99%
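As a rough illustration of the affinity-space idea attributed to Zhou [8]: instead of aligning per-pixel class posteriors directly, one builds, for each pixel, the similarity between its prediction and those of its spatial neighbors, and adapts in that space. In the sketch below, the 4-neighborhood, cosine similarity, and wrap-around shifting via torch.roll are simplifying assumptions for illustration, not the exact construction in [8]:

```python
# Illustrative affinity map from segmentation outputs (assumed construction).
import torch
import torch.nn.functional as F

def affinity_map(logits, shifts=((0, 1), (1, 0), (0, -1), (-1, 0))):
    """For each pixel, compare its softmax prediction with each spatial
    neighbor's, capturing co-occurring output patterns between neighbors.

    logits: (B, C, H, W) -> returns (B, len(shifts), H, W) affinities.
    """
    p = F.softmax(logits, dim=1)
    affinities = []
    for dy, dx in shifts:
        # Shift the prediction map and compare with the original.
        # Note: torch.roll wraps around image borders; a real implementation
        # would likely pad or mask the boundary instead.
        q = torch.roll(p, shifts=(dy, dx), dims=(2, 3))
        affinities.append(F.cosine_similarity(p, q, dim=1))
    return torch.stack(affinities, dim=1)
```

Domain alignment (e.g., the adversarial setup sketched earlier) can then be applied to these affinity maps rather than to the raw class-probability maps.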