2019 IEEE International Symposium on Multimedia (ISM)
DOI: 10.1109/ism46123.2019.00045
Deep Autoencoders with Value-at-Risk Thresholding for Unsupervised Anomaly Detection

Abstract: Many real-world monitoring and surveillance applications require non-trivial anomaly detection to be run in the streaming model. We consider an incremental-learning approach, wherein a deep-autoencoding (DAE) model of what is normal is trained and used to detect anomalies at the same time. In the detection of anomalies, we utilise a novel thresholding mechanism, based on value at risk (VaR). We compare the resulting convolutional neural network (CNN) against a number of subspace methods, and present results on …
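The "trained and used to detect anomalies at the same time" idea from the abstract can be sketched as follows. This is a minimal, hypothetical stand-in: a linear autoencoder updated by SGD instead of the paper's deep convolutional model, with synthetic Gaussian data in place of a real sensor stream.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical minimal sketch: a linear "autoencoder" with encoder W and
# decoder W.T, trained online on reconstruction loss. Each incoming sample
# is scored by its residual BEFORE the update, so detection and training
# happen on the same stream ("detect while learning").
d, k, lr = 8, 3, 0.01          # input dim, code dim, learning rate (assumed)
W = rng.normal(scale=0.1, size=(k, d))

def residual(x, W):
    """Reconstruction error ||x - decode(encode(x))||, used as anomaly score."""
    x_hat = W.T @ (W @ x)
    return np.linalg.norm(x - x_hat)

def sgd_step(x, W, lr):
    """One SGD step on 0.5 * ||W.T @ W @ x - x||^2 with respect to W."""
    z = W @ x
    e = W.T @ z - x                               # reconstruction error vector
    grad = np.outer(z, e) + W @ np.outer(e, x)    # exact gradient of the loss
    return W - lr * grad

scores = []
for t in range(2000):
    x = rng.normal(size=d)         # stand-in for one streamed observation
    scores.append(residual(x, W))  # score first ...
    W = sgd_step(x, W, lr)         # ... then incrementally update the model
```

As the model of "normal" improves, early residuals are larger than later ones; the stream of residuals then feeds the VaR thresholding step described below in the citing text.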

Cited by 5 publications (2 citation statements). References 37 publications.
“…Pimentel et al [18] integrated autoencoders with active learning, enhancing unsupervised anomaly detection models. Akhriev et al [19] combined regular data deep autoencoding with unique thresholding techniques to detect anomalies. The use of autoencoders as a foundational element in BENet architecture for cross-domain robust face forgery detection aligns with their demonstrated effectiveness in anomaly identification and data representation.…”
Section: Autoencoder
confidence: 99%
“…In the second step, we consider the individual residuals as samples of an empirical distribution, and take the value at risk (VaR) at λ as a threshold. We provide details in (Akhriev and Marecek 2019; Akhriev, Marecek, and Simonetto 2018). The test as to whether the residual at each sensor is below the threshold results in a binary map, suggesting whether the observation of each sensor is likely to have come from our model or not.…”
Section: The Overall Schema
confidence: 99%
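The two-step thresholding quoted above can be sketched directly: treat the residuals as an empirical distribution, take the λ-quantile as the VaR threshold, and flag each sensor whose residual exceeds it. The residuals below are synthetic stand-ins, and λ = 0.95 is an illustrative choice, not the paper's value.

```python
import numpy as np

def var_threshold(residuals, lam=0.95):
    """VaR at level lam: the lam-quantile of the empirical residual distribution."""
    return np.quantile(residuals, lam)

def anomaly_map(residuals, lam=0.95):
    """Binary map: True where a sensor's residual exceeds the VaR threshold,
    i.e. the observation is unlikely to have come from the learned model."""
    thr = var_threshold(residuals, lam)
    return residuals > thr

# Synthetic per-sensor residuals (exponential, as reconstruction errors often are).
rng = np.random.default_rng(0)
res = rng.exponential(scale=1.0, size=1000)

flags = anomaly_map(res, lam=0.95)
print(flags.sum())   # ~5% of the 1000 samples are flagged
```

Because the threshold is a quantile of the data itself, the flag rate is controlled at roughly 1 − λ regardless of the residual distribution's scale, which is what makes the VaR mechanism attractive for streaming data with no labelled anomalies.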