2020
DOI: 10.1007/978-3-030-58589-1_13

Backpropagated Gradient Representations for Anomaly Detection

Abstract: Learning representations that clearly distinguish between normal and abnormal data is key to the success of anomaly detection. Most existing anomaly detection algorithms use activation representations from forward propagation while not exploiting gradients from backpropagation to characterize data. Gradients capture the model updates required to represent data. Anomalies require more drastic model updates to fully represent them compared to normal data. Hence, we propose the utilization of backpropagated gradients…
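The abstract's core idea lends itself to a short illustration: if a model is trained only on normal data, the reconstruction loss on an anomalous input demands a larger parameter update, so the magnitude of the backpropagated gradients can serve as an anomaly score. The PyTorch sketch below shows this under stated assumptions; the TinyAE architecture, the MSE loss, the flattened 784-dimensional input, and the gradient_anomaly_score helper are all illustrative choices, not the paper's exact GradCon formulation (which additionally constrains gradient directions during training).

import torch
import torch.nn as nn

# Minimal autoencoder; the architecture is an illustrative assumption,
# not the one used in the paper.
class TinyAE(nn.Module):
    def __init__(self, dim=784, latent=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                 nn.Linear(128, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                 nn.Linear(128, dim))

    def forward(self, x):
        return self.dec(self.enc(x))

def gradient_anomaly_score(model, x):
    # Score a sample by the magnitude of the backpropagated
    # reconstruction-loss gradients: anomalies require larger model
    # updates, so their gradients tend to be larger.
    loss = nn.functional.mse_loss(model(x), x)
    grads = torch.autograd.grad(loss, list(model.dec.parameters()))
    return torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()

# Usage: after training the model on normal data only, higher scores
# flag likely anomalies; the detection threshold is application-specific.
model = TinyAE()
x = torch.rand(1, 784)  # stand-in input; assumes flattened 28x28 images
print(gradient_anomaly_score(model, x))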

Cited by 62 publications (47 citation statements)
References 35 publications
“…By adopting an adversarial loss to regularize and match the latent encoding distribution, AAEs can employ any arbitrary prior p(z), as long as sampling is feasible. Finally, other AE variants that have been applied to AD include RNN-based AEs [194], [231], [397], [398], convolutional AEs [54], AE ensembles [126], [398], and variants that constrain the gradients [399] or actively control the latent code topology [400] of an AE. AEs also have been utilized in two-step approaches that use AEs for dimensionality reduction and apply traditional methods on the learned embeddings [136], [401], [402].…”
Section: Autoencoders
Mentioning (confidence: 99%)
“…Besides, there are many other approaches contributing to OOD detection, such as GradCon [17], generalized ODIN [11] and FSSD [13]. Please refer to the papers for more details.…”
Section: Objective Methods
Mentioning (confidence: 99%)
“…Apart from their original utility as a tool to search for a converged solution, gradients have been utilized for various purposes, including visualization [14,15,16] and adversarial attack generation [3,17]. Gradients have also been explored to obtain effective representations [18,19,20,21,12] for many applications including image quality and saliency estimation, and out-of-distribution/anomaly/novelty detection. However, the effectiveness of gradient-based representations has not been fully explored in the application of open-set recognition.…”
Section: Related Work
Mentioning (confidence: 99%)