2019
DOI: 10.1007/978-3-030-32251-9_32

Unsupervised Anomaly Localization Using Variational Auto-Encoders

Abstract: An assumption-free automatic check of medical images for potentially overseen anomalies would be a valuable assistance for a radiologist. Deep learning and especially Variational Auto-Encoders (VAEs) have shown great potential in the unsupervised learning of data distributions. In principle, this allows for such a check and even the localization of parts in the image that are most suspicious. Currently, however, the reconstruction-based localization by design requires adjusting the model architecture to the sp…

Cited by 100 publications (93 citation statements) | References 9 publications
“…Anomaly detection or outlier detection is a lasting yet active research area in machine learning [35]- [37], which is a key technique to overcome the data bottleneck [38]. A natural choice for handling this problem is one-class classification methods, such as OC-SVM [39], SVDD [40], Deep SVDD [41] and 1-NN.…”
Section: B. Anomaly Detection
confidence: 99%
“…Several extensions such as context encoder [47], constrained VAE [48], adversarial autoencoder [48], GMVAE [49], Bayesian VAE [50] and anoVAEGAN [51] improved the accuracy of the projection. Based on the pretrained projection, You et al [49] restored the lesion area by involving an optimization on the latent manifold, while Zimmerer et al [38] located the anomaly with a term derived from the Kullback-Leibler (KL)-divergence.…”
Section: B. Anomaly Detection
confidence: 99%
“…So, they enhanced the representative ability of an auto-encoder based model by imposing a consistency in the latent space to constrain the encoder to find a latent space where the projections of the input image and the reconstructed image are close to each other. Zimmerer et al [43] used a variational auto-encoder with the Kullback-Leibler divergence to measure reconstruction errors. The AEs are able to simulate non-linear transformations from the latent space to input data, and then to detect anomalies as a deviation from the transforms by measuring the reconstruction error.…”
Section: Related Work
confidence: 99%
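The deviation-from-reconstruction idea in the excerpt above can be sketched numerically. This is a minimal illustration, not the cited papers' exact formulation: the per-pixel squared error between an image and its reconstruction serves as the anomaly map, and thresholding that map localizes suspicious pixels (`x_hat` is assumed to come from some trained auto-encoder).

```python
import numpy as np

def anomaly_map(x, x_hat):
    """Per-pixel anomaly map: squared reconstruction error.

    x, x_hat: arrays of the same shape (input image and its
    reconstruction). Pixels the model fails to reconstruct
    receive high scores and are treated as anomalous.
    """
    return (x - x_hat) ** 2

def localize(x, x_hat, threshold):
    # Binary mask of pixels whose reconstruction error exceeds a threshold.
    return anomaly_map(x, x_hat) > threshold
```

A well-reconstructed (in-distribution) pixel gets a score near zero, while a pixel the model cannot reproduce is flagged; the threshold is a free parameter that would in practice be tuned on held-out normal data.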
“…The default approach consists in using the reconstruction loss to identify samples with anomalies, based on the assumption that the VAE will reconstruct their anomaly-free versions. However, recent results suggest that the KL divergence is actually a better anomaly score [3]. This can be caused by the high representational power of VAEs, which can reconstruct even (previously unseen) anomalies.…”
Section: Introduction
confidence: 99%
“…Recently, methods based on Variational Auto-Encoders (VAEs) have been proposed to identify and localize anomalies in medical images [2,3,4]. VAEs are generative models trained by minimizing a loss function composed of a reconstruction term (measuring the distance between original images and reconstructions) and a Kullback-Leibler (KL) divergence term (measuring the distance between the latent distribution and a prior, generally assumed to be Gaussian).…”
Section: Introduction
confidence: 99%
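The two loss terms described above can be written out in a minimal numerical sketch. It assumes a diagonal-Gaussian posterior and a standard-normal prior, which gives the KL term its usual closed form; `mu` and `logvar` are hypothetical outputs of a trained encoder, and either term (reconstruction or KL) can be read off as an anomaly score:

```python
import numpy as np

def vae_terms(x, x_hat, mu, logvar):
    """Return the two components of the VAE loss for one sample.

    Reconstruction term: squared distance between the input x
    and its reconstruction x_hat.
    KL term: closed-form KL divergence from the encoder's
    diagonal Gaussian N(mu, exp(logvar)) to the N(0, I) prior.
    """
    rec = np.sum((x - x_hat) ** 2)
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return rec, kl
```

When the posterior matches the prior exactly (mu = 0, logvar = 0) the KL term vanishes; samples whose latent code is pushed far from the prior receive a high KL score, which is the quantity the excerpt reports as a competitive anomaly score.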