2021
DOI: 10.48550/arxiv.2108.12043
Preprint

Learning Disentangled Representations in the Imaging Domain

Xiao Liu,
Pedro Sanchez,
Spyridon Thermos
et al.

Abstract: Disentangled representation learning has been proposed as an approach to learning general representations. This can be done in the absence of, or with limited, annotations. A good general representation can be readily fine-tuned for new target tasks using modest amounts of data, or even be used directly in unseen domains achieving remarkable performance in the corresponding task. This alleviation of the data and annotation requirements offers tantalising prospects for tractable and affordable applications in c…

Cited by 2 publications (2 citation statements)
References 116 publications (183 reference statements)
“…A latent representation is disentangled if each dimension in the latent space is sensitive to one generative factor and comparably invariant to changes in the other factors (Liu et al., 2021). Such a disentangled representation is a great asset for interpretability.…”
Section: Variational Autoencoder (VAE) and β-VAE
confidence: 99%
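The definition quoted above — each latent dimension sensitive to one generative factor and invariant to the rest — is what the β-VAE objective from the cited section pressures the model toward. A minimal NumPy sketch of that objective follows; the function name `beta_vae_loss` and the default β value are illustrative assumptions, not code from the paper:

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    # Reconstruction term: mean squared error between input and reconstruction.
    recon = np.mean((x - x_recon) ** 2)
    # KL divergence between the diagonal Gaussian posterior N(mu, exp(logvar))
    # and the standard normal prior, summed over latent dims, averaged over batch.
    kl = -0.5 * np.mean(np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=1))
    # beta > 1 up-weights the KL term, pushing each latent dimension toward the
    # factorised prior and thereby encouraging disentangled dimensions.
    return recon + beta * kl
```

With β = 1 this reduces to the standard VAE evidence lower bound; raising β trades reconstruction fidelity for a more factorised latent space.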
“…Learning independent and semantic representations whose individual dimensions have interpretable meaning is usually referred to as disentangled representation learning (DRL) [2], [9], [10]. On the other hand, disentangled representation learning [11] aims to learn the representation of the underlying explainable factors behind the observed data, and it is considered one of the possible ways for AI to fundamentally understand the world.…”
Section: Introduction
confidence: 99%