2019
DOI: 10.1007/978-3-030-33391-1_3

Multi-layer Domain Adaptation for Deep Convolutional Networks

Abstract: Despite their success in many computer vision tasks, convolutional networks tend to require large amounts of labeled data to achieve generalization. Furthermore, performance is not guaranteed on a sample from an unseen domain at test time if the network was not exposed to similar samples from that domain at training time. This hinders the adoption of these techniques in clinical settings, where imaging data is scarce and where the intra- and inter-domain variance of the data can be substantial. We propo…

Cited by 7 publications (10 citation statements)
References 9 publications
“…As we show in the experimental section, training the U-Net network uniquely with synthetically generated data S_s does not generalize well to real depth maps. In order to narrow the domain gap between real and synthetic depth maps, we introduce a multi-layer domain adaptation strategy [17]. More specifically, the features extracted at each encoder block of the U-Net are forwarded to multiple classifiers, which aim at distinguishing between the feature maps belonging to real (X_i^r) or synthetic (X_i^s) samples.…”
Section: B. Model
confidence: 99%
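The quoted strategy attaches a small domain classifier to the features of every encoder block and trains the encoder adversarially, commonly via a gradient-reversal layer. The following is a minimal NumPy sketch of one such per-layer classifier; the names (`domain_logit`, `grad_reverse`) and the global-average-pooling choice are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def domain_logit(feat, w, b):
    """Hypothetical per-layer domain classifier: global average
    pooling over spatial dims, then a linear layer -> one logit."""
    pooled = feat.mean(axis=(1, 2))       # (C, H, W) -> (C,)
    return pooled @ w + b, pooled

def grad_reverse(grad, lam=1.0):
    """Gradient reversal: identity on the forward pass, -lam * grad
    on the backward pass, so the encoder learns to *fool* the
    domain classifier while the classifier itself improves."""
    return -lam * grad

# Fake encoder features from one U-Net block: 8 channels, 16x16.
feat = rng.standard_normal((8, 16, 16))
w = rng.standard_normal(8) * 0.1
b = 0.0
y = 1.0                                    # 1 = real, 0 = synthetic

z, pooled = domain_logit(feat, w, b)
# d(BCE)/d(logit) = sigmoid(z) - y, chained through the linear layer:
grad_pooled = (sigmoid(z) - y) * w         # gradient w.r.t. pooled feats

# What actually flows back into the encoder after reversal:
grad_into_encoder = grad_reverse(grad_pooled, lam=0.5)
```

In a full model this would be repeated at each encoder block, with the per-layer domain losses summed into the adversarial objective.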
“…Many novel methods have emerged recently [46,54,62,70]. For instance, Ciga et al. [5] introduced domain discriminators at multiple layers of deep networks. However, we found that simply applying DA loss to multiple layers in multi-exit architectures [27] is not satisfactory.…”
Section: Related Work
confidence: 99%
“…By using λ = 0, the model becomes similar to (Drozdzal et al., 2018), where the generator is not constrained to produce realistic images. In contrast, for a large λ, our model becomes similar to an adversarial domain classifier (Ciga et al., 2019), where generated images are normalized across different domains but not optimal for segmentation, with added realism constraints.…”
Section: Adversarial Training
confidence: 99%
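The λ trade-off described above amounts to a weighted two-term generator objective. A minimal sketch, assuming the loss names (`seg_loss`, `adv_loss`) stand in for the segmentation and adversarial/realism terms of the cited model:

```python
def generator_loss(seg_loss, adv_loss, lam):
    """Combined objective: lam = 0 ignores realism (pure task-driven
    normalization, as in Drozdzal et al., 2018), while a large lam
    lets the adversarial/realism term dominate (closer to an
    adversarial domain classifier, as in Ciga et al., 2019)."""
    return seg_loss + lam * adv_loss

# lam = 0: only the segmentation term matters.
print(generator_loss(1.0, 2.0, 0.0))   # -> 1.0
# large lam: the adversarial term dominates.
print(generator_loss(1.0, 2.0, 10.0))  # -> 21.0
```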
“…For instance, Onofrey et al. (2019) evaluate the benefit of applying different normalization techniques to multi-site prostate MRI before deep learning-based segmentation. Recently, a few studies have explored the potential of learning methods for dynamic data augmentation and normalization (Drozdzal et al., 2018; Ciga et al., 2019; Hesse et al., 2020) as well as image denoising (Oguz et al., 2020). Drozdzal et al. (2018) use two consecutive fully-convolutional CNNs, a pre-processor network followed by a segmentation network trained with a Dice metric, to normalize an input image prior to segmentation.…”
Section: Related Work
confidence: 99%
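The Dice metric mentioned for training the segmentation network is usually implemented as a differentiable "soft" Dice loss over probability maps. A minimal NumPy sketch (the function name and the ε smoothing constant are illustrative choices, not the cited papers' exact formulation):

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - Dice overlap between a predicted
    probability map and a binary ground-truth mask, both in [0, 1].
    eps avoids division by zero on empty masks."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Perfect overlap -> loss near 0; no overlap -> loss near 1.
mask = np.ones((4, 4))
print(soft_dice_loss(mask, mask))            # ~0.0
print(soft_dice_loss(mask, np.zeros((4, 4))))  # ~1.0
```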