2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00746
DADA: Depth-Aware Domain Adaptation in Semantic Segmentation

Abstract: Figure 1: We propose a novel depth-aware domain adaptation framework (DADA) to efficiently leverage depth as privileged information in the unsupervised domain adaptation setting. This example shows how semantic segmentation of a scene from the target domain benefits from the proposed approach, in comparison to state-of-the-art domain adaptation without depth. At the top of the figure, we use different background colors (blue and red) to indicate the source and target information available during training. …
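The abstract describes leveraging depth, available only for the source domain, as privileged information. As a rough illustration of that general idea (not the exact DADA architecture), the sketch below attaches an auxiliary depth-regression head to a shared segmentation encoder and supervises it only on source batches; the module names, the toy encoder, and the loss weighting are placeholders for illustration.

```python
import torch.nn as nn
import torch.nn.functional as F

class DepthAwareSegNet(nn.Module):
    """Toy model: shared encoder, a segmentation head, and an auxiliary
    depth head. Depth acts as privileged information because it is only
    supervised on source images, where (synthetic) depth maps exist."""
    def __init__(self, num_classes: int, feat_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(            # stand-in for a real backbone
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Conv2d(feat_dim, num_classes, 1)
        self.depth_head = nn.Conv2d(feat_dim, 1, 1)

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.depth_head(feats)

def source_step(model, img, seg_gt, depth_gt, lambda_depth=0.1):
    """Supervised loss on a source batch: cross-entropy for segmentation
    plus an L1 depth term. Target images would only contribute an
    (adversarial) alignment loss, which is omitted in this sketch."""
    seg_logits, depth_pred = model(img)
    loss_seg = F.cross_entropy(seg_logits, seg_gt)
    loss_depth = F.l1_loss(depth_pred, depth_gt)
    return loss_seg + lambda_depth * loss_depth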

Cited by 184 publications (157 citation statements)
References 38 publications (65 reference statements)
“…Multi-task learning (MTL) is defined as the joint learning of several tasks at once, either by learning a shared feature representation [43], [44] or by enforcing cross-task consistency checks during training [45], [46]. Depth estimation, as used in this work, has been shown to both benefit from and contribute to MTL with other tasks, e.g., semantic segmentation [44], [47], [48], domain adaptation [7], optical flow estimation [49], [50], or 3D pose estimation [4], [24]. In particular, self-supervised depth estimation has been combined with semantic segmentation [46], [51]–[53] or instance segmentation [36], [54] to mitigate the effect of moving objects, which violate the static-world assumption made during training of such models.…”
Section: B. Multi-task Learning (mentioning)
confidence: 99%
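The statement above distinguishes two MTL mechanisms: shared feature representations and cross-task consistency checks. The snippet below sketches one hypothetical consistency term of the latter kind, an edge-aware coupling that encourages predicted depth to be smooth wherever the predicted segmentation is locally uniform; it is not taken from any of the cited works, and the function name and weighting are assumptions.

```python
import torch

def edge_consistency_loss(depth, seg_logits):
    """Hypothetical cross-task consistency term: penalize depth gradients
    where the predicted segmentation is locally uniform, so that depth
    discontinuities line up with semantic boundaries.
    depth:      (N, 1, H, W) predicted depth
    seg_logits: (N, C, H, W) predicted class scores
    """
    seg_prob = seg_logits.softmax(dim=1)
    # First-order differences along x and y for both predictions.
    d_dx = (depth[..., :, 1:] - depth[..., :, :-1]).abs()
    d_dy = (depth[..., 1:, :] - depth[..., :-1, :]).abs()
    s_dx = (seg_prob[..., :, 1:] - seg_prob[..., :, :-1]).abs().sum(1, keepdim=True)
    s_dy = (seg_prob[..., 1:, :] - seg_prob[..., :-1, :]).abs().sum(1, keepdim=True)
    # The exp term relaxes depth smoothness across likely class boundaries.
    return (d_dx * torch.exp(-s_dx)).mean() + (d_dy * torch.exp(-s_dy)).mean()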
“…which considers the number of true positives (TP_s), false negatives (FN_s), and false positives (FP_s) between the estimated segmentation mask m̂ and the ground-truth segmentation mask m for each class s. Note that TP_s, FN_s, and FP_s are calculated over the entire test set, and only afterwards is the mIoU obtained according to (7). For online performance monitoring, however, the segmentation performance has to be predicted and evaluated on a single-image basis (image index n) in order to be real-time capable.…”
Section: Performance Evaluation Metrics (mentioning)
confidence: 99%
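The quoted passage refers to the citing paper's equation (7); the standard dataset-level definition it describes is IoU_s = TP_s / (TP_s + FP_s + FN_s), with mIoU the mean over classes, where the counts are accumulated over the whole test set before the ratio is taken. A minimal sketch of that computation (the function name and the NaN handling for absent classes are my assumptions):

```python
import numpy as np

def dataset_miou(preds, gts, num_classes):
    """Dataset-level mean IoU: accumulate TP_s, FP_s, FN_s over the whole
    test set first, then compute IoU_s = TP_s / (TP_s + FP_s + FN_s) per
    class s and average. `preds` and `gts` are iterables of integer label
    maps of identical shape."""
    tp = np.zeros(num_classes, dtype=np.int64)
    fp = np.zeros(num_classes, dtype=np.int64)
    fn = np.zeros(num_classes, dtype=np.int64)
    for pred, gt in zip(preds, gts):
        for s in range(num_classes):
            p, g = (pred == s), (gt == s)
            tp[s] += np.logical_and(p, g).sum()
            fp[s] += np.logical_and(p, ~g).sum()
            fn[s] += np.logical_and(~p, g).sum()
    denom = tp + fp + fn
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)
    return np.nanmean(iou)  # ignore classes absent from the whole test set
```

For the single-image (online) variant mentioned in the quote, the same computation would be applied per image index n rather than over the whole test set.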
“…There are also recent improvements in language understanding with Bidirectional Encoder Representations from Transformers (BERT) [9] and A Robustly Optimized BERT Pretraining Approach (RoBERTa) [10], and, in addition, the recent breakthrough in task-agnostic transfer learning by Howard et al. [11]. In CV, DL has advanced, inter alia, the tasks of image classification [12,13], object detection [14][15][16], object tracking [17], pose estimation [18][19][20][21], super-resolution [22], and semantic segmentation [23][24][25][26][27][28]. These advancements give rise to new applications in, e.g., solid-state materials science and chemical sciences [29,30], meteorology [31], medicine [32][33][34][35][36][37][38][39], seismology [40][41][42], biology [43], life sciences in general [44], chemistry [45], and physics [46][47][48][49][50][51][52]…”
Section: Introduction (mentioning)
confidence: 99%