2021
DOI: 10.1016/j.neucom.2021.04.089
VAE-based Deep SVDD for anomaly detection

Cited by 72 publications (23 citation statements)
References 18 publications
“…It has also been shown that VAE models are capable of learning representations with disentangled factors (Higgins et al., 2016a) due to the isotropic Gaussian priors on the latent variables, a known strength of Bayesian models. The better performance of VAE compared to AE models has also been shown previously in other applications such as anomaly detection, object identification, and BCIs (Dai et al., 2019; Tahir et al., 2021; Zhou et al., 2021). As an additional point, comparing the results of VAE with SVAE suggests the added value of supervised learning in training better models.…”
Section: Impact Of Variational Inference In Feature Learning (supporting)
confidence: 72%
“…This aspect involves several weaknesses in using an AE for anomaly detection tasks rather than a VAE, whose probabilistic encoder models the distribution parameters of the latent variables rather than the latent variables themselves [14], thus capturing more data variability and yielding a more homogeneous latent space than a standard AE. In [48], a general anomaly detection method based on a VAE and Support Vector Data Description (SVDD) [49] is proposed, in which the SVDD decision boundary is learned simultaneously with the latent representations of the data and fitted to them, in order to avoid hypersphere collapse, i.e. the mapping of all data to a single point in latent space [50].…”
Section: Related Work (mentioning)
confidence: 99%
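The one-class SVDD objective mentioned in this statement can be sketched in a few lines. This is a minimal NumPy illustration, not the implementation from [48]: the latent codes `z` are random placeholders, and fixing the center from an initial forward pass is one common way to discourage the trivial collapsed solution.

```python
import numpy as np

def svdd_loss(z, center):
    """One-class SVDD-style objective: mean squared Euclidean distance
    of latent codes z to the hypersphere center."""
    return np.mean(np.sum((z - center) ** 2, axis=1))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 4))   # batch of latent codes (placeholder values)
center = z.mean(axis=0)       # center fixed from an initial pass, then held
loss = svdd_loss(z, center)   # small loss => codes cluster around the center
```

If the center were itself a free parameter optimized jointly with an unconstrained encoder, the loss could be driven to zero by mapping every input to one point, which is exactly the hypersphere collapse the cited method is designed to avoid.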
“…Leveraging this formulation, VAE training can be performed by maximizing the ELBO [48]. However, the expected reconstruction error requires sampling random latent variables z from the approximated posterior q φ (z|x); without further reformulation this makes training intractable in practice, since the gradient of the ELBO with respect to the parameters φ cannot be estimated through the stochastic sampling step.…”
Section: Variational Autoencoder (mentioning)
confidence: 99%
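The standard remedy for the gradient problem described above is the reparameterization trick: z is rewritten as a deterministic function of the encoder outputs (mu, log_var) plus external noise eps, so the sampling no longer blocks gradients with respect to φ. A minimal NumPy sketch (the array shapes are illustrative):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Reparameterization trick: z ~ N(mu, sigma^2) expressed as
    z = mu + sigma * eps with eps ~ N(0, I), so z is differentiable
    in mu and log_var while the randomness lives in eps."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(42)
mu = np.zeros((4, 2))        # encoder mean output (placeholder)
log_var = np.zeros((4, 2))   # encoder log-variance output (placeholder)
z = reparameterize(mu, log_var, rng)   # one latent sample per row
```

In an autodiff framework the same expression lets the ELBO gradient flow through `mu` and `log_var` into the encoder parameters.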
“…The article [15] presents a deep support vector data description based on a variational autoencoder (Deep SVDD-VAE). In the proposed model, the VAE is used to reconstruct the input instances, while a spherical discriminative boundary is simultaneously learned from the latent representations based on SVDD.…”
Section: Similar Solutions (mentioning)
confidence: 99%
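A joint objective of the kind this statement describes, reconstruction plus a KL regularizer plus an SVDD sphere term, might be sketched as below. This is an assumption-laden illustration, not the loss from [15]: the weighting `lam` is a hypothetical hyperparameter, and the exact form and weighting of the terms in the cited paper may differ.

```python
import numpy as np

def deep_svdd_vae_loss(x, x_rec, mu, log_var, z, center, lam=1.0):
    """Sketch of a combined Deep SVDD-VAE objective:
    reconstruction error + KL divergence to N(0, I) + SVDD sphere term."""
    rec = np.mean(np.sum((x - x_rec) ** 2, axis=1))            # reconstruction
    kl = -0.5 * np.mean(np.sum(1 + log_var - mu ** 2
                               - np.exp(log_var), axis=1))     # KL(q||N(0,I))
    sphere = np.mean(np.sum((z - center) ** 2, axis=1))        # SVDD distance
    return rec + kl + lam * sphere
```

Minimizing the sphere term alongside the ELBO terms is what couples the discriminative boundary to the latent representations during training.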