2019
DOI: 10.1007/978-3-030-32245-8_37

Constrained Domain Adaptation for Segmentation

Abstract: We propose to adapt segmentation networks with a constrained formulation, which embeds domain-invariant prior knowledge about the segmentation regions. Such knowledge may take the form of simple anatomical information, e.g., structure size or shape, estimated from source samples or known a priori. Our method imposes domain-invariant inequality constraints on the network outputs of unlabeled target samples. It implicitly matches prediction statistics between target and source domains with permitted uncertainty o…
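To make the constrained formulation concrete, here is a minimal PyTorch sketch of a quadratic penalty that keeps the soft size of a predicted target-domain region inside a prior interval [size_min, size_max]. The names (size_penalty, model, lam, a, b) are illustrative assumptions; the paper's exact penalty function and training loop may differ.

```python
import torch

def size_penalty(probs, size_min, size_max):
    """Quadratic penalty enforcing size_min <= predicted size <= size_max.

    probs: (B, H, W) tensor of foreground probabilities for target images.
    Returns zero when the soft size estimate lies inside the prior bounds,
    and grows quadratically with the violation otherwise.
    """
    size = probs.sum(dim=(1, 2))                     # soft size estimate per image
    below = torch.clamp(size_min - size, min=0) ** 2  # violation of the lower bound
    above = torch.clamp(size - size_max, min=0) ** 2  # violation of the upper bound
    return (below + above).mean()

# Hypothetical training objective: supervised loss on labeled source samples,
# size-prior penalty on unlabeled target samples.
# loss = cross_entropy(model(x_source), y_source) \
#        + lam * size_penalty(model(x_target).softmax(1)[:, 1], a, b)
```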

Cited by 30 publications (16 citation statements)
References 20 publications
“…They assume that the high-level features extracted by fully-connected layers exhibit a large domain shift, so CMD is imposed on these layers to perform adaptation. Bateson et al [116] propose an unsupervised constrained DA framework for disc MR image segmentation. They use prior knowledge that is invariant across domains as inequality constraints, imposing these constraints on the predictions for unlabeled target samples as a domain-adaptation regularizer.…”
Section: Unsupervised Deep DA (mentioning)
confidence: 99%
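As a reference for the CMD term mentioned in the statement above, here is a small, hedged PyTorch sketch of the central moment discrepancy between two feature batches. The number of moments K and the omission of the original definition's scaling factors are assumptions of this sketch, not the cited paper's exact implementation.

```python
import torch

def cmd(x, y, K=5):
    """Central Moment Discrepancy between feature batches x and y of shape (N, D).

    Matches the means and the central moments of orders 2..K across the two
    batches; the per-order scaling factors of the original CMD definition are
    omitted here for simplicity (an assumption of this sketch).
    """
    mx, my = x.mean(0), y.mean(0)
    d = torch.norm(mx - my)                 # first-order (mean) discrepancy
    cx, cy = x - mx, y - my                 # centered features
    for k in range(2, K + 1):
        d = d + torch.norm((cx ** k).mean(0) - (cy ** k).mean(0))
    return d
```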
“…Nevertheless, their formulation is limited to linear constraints. More recently, inequality constraints have been tackled by augmenting the learning objective with a penalty-based function, e.g., an L2 penalty, which can be imposed within a continuous optimization framework [5,18,19] or in the discrete domain [28]. Although these methods have demonstrated remarkable performance in weakly supervised segmentation, they require that prior knowledge, exact or approximate, be given.…”
Section: Related Work (mentioning)
confidence: 99%
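The penalty-based approach described in this statement can be sketched generically: an inequality constraint g(theta) <= 0 is folded into the objective as a quadratic (L2) penalty. The function and parameter names below are illustrative, not taken from any of the cited works.

```python
import torch

def penalized_loss(base_loss, g_value, weight):
    """Augment a learning objective with an L2 penalty for the inequality
    constraint g(theta) <= 0: the penalty is zero where the constraint holds
    and grows quadratically with the amount of violation."""
    violation = torch.clamp(g_value, min=0)
    return base_loss + weight * violation ** 2
```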
“…Weakly-supervised learning methods can significantly reduce the annotation cost of collecting a training set. These methods differ in the type of weak annotation they rely on, such as image-level labels [12], points [2], partial labels [13], or global image statistics [1]. In this work, we build upon recent papers that focus on training neural networks with pseudo-annotations generated from bounding boxes.…”
Section: Related Work (mentioning)
confidence: 99%