2019
DOI: 10.1007/978-3-030-10925-7_11

Group Anomaly Detection Using Deep Generative Models

Abstract: Unlike conventional anomaly detection research that focuses on point anomalies, our goal is to detect anomalous collections of individual data points. In particular, we perform group anomaly detection (GAD) with an emphasis on irregular group distributions (e.g. irregular mixtures of image pixels). GAD is an important task in detecting unusual and anomalous phenomena in real-world applications such as high energy particle physics, social media and medical imaging. In this paper, we take a generative approach b…

Cited by 57 publications (56 citation statements)
References 24 publications
“…2) Types of Anomalies: Various types of anomalies have been identified in the literature [161], [179]. These include point anomalies, conditional or contextual anomalies [169], [171], [191]–[195], and group or collective anomalies [146], [193], [196]–[199]. We extend these three established types by further adding low-level sensory anomalies and high-level semantic anomalies [200], a distinction that is particularly relevant for choosing between deep and shallow feature maps.…”
Section: What Is an Anomaly?
Mentioning confidence: 99%
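The point-versus-collective distinction quoted above can be made concrete with a small sketch (illustrative only; the detectors, window size, and thresholds below are our own choices, not from the cited works). A z-score test catches a single extreme value, while a run of individually unremarkable values is only caught by a window-level statistic:

```python
import numpy as np

def point_anomalies(x, thresh=3.0):
    """Point anomalies: values that are individually extreme (|z-score| > thresh)."""
    z = np.abs(x - x.mean()) / x.std()
    return np.where(z > thresh)[0]

def collective_anomalies(x, win=10, thresh=0.1):
    """Collective anomalies: windows whose local variance collapses.
    Each value is unremarkable on its own; the run is anomalous jointly."""
    starts = [i for i in range(len(x) - win + 1) if x[i:i + win].std() < thresh]
    return np.array(starts, dtype=int)

# A noisy signal with one spike (point anomaly) and one frozen run (collective).
rng = np.random.default_rng(0)
x = rng.normal(size=200)
x[50] = 8.0          # point anomaly: a single extreme value
x[100:110] = 0.5     # collective anomaly: ordinary values, zero variance
```

Note that the frozen run at indices 100–109 is invisible to the z-score test (0.5 is an ordinary value), while the spike at index 50 is invisible to the variance test, which is why the two anomaly types need different detectors.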
“…The major approaches to deep AD include deep AE variants [44], [51], [54], [125]–[135], deep one-class classification [136]–[145], methods based on DGMs, such as GANs [50], [56], [146]–[151], and recent self-supervised methods [152]–[156]. In comparison to traditional AD methods, where a feature representation is fixed a priori (e.g., via a kernel feature map), these approaches aim to learn a feature map of the data φω : x → φω(x), a deep neural network parameterized with weights ω, as part of their learning objective.…”
Section: Introduction
Mentioning confidence: 99%
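As a deliberately shallow stand-in for the deep AE variants listed above, the sketch below learns a linear feature map (a toy φω) by plain gradient descent and scores samples by reconstruction error. All function names and hyperparameters here are our own; a real deep AE would replace the linear maps with a multi-layer network.

```python
import numpy as np

def train_autoencoder(X, k=2, lr=0.01, epochs=2000, seed=0):
    """Train a linear autoencoder x -> W2 @ (W1 @ x) by gradient descent on
    the mean squared reconstruction error. W1 plays the role of the learned
    feature map phi_omega; W2 decodes the k-dim code back to d dimensions."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(scale=0.1, size=(k, d))   # encoder weights
    W2 = rng.normal(scale=0.1, size=(d, k))   # decoder weights
    for _ in range(epochs):
        Z = X @ W1.T                       # codes: phi_omega(x)
        E = Z @ W2.T - X                   # reconstruction residuals
        gW2 = (2.0 / n) * E.T @ Z          # dLoss/dW2
        gW1 = (2.0 / n) * (E @ W2).T @ X   # dLoss/dW1
        W2 -= lr * gW2
        W1 -= lr * gW1
    return W1, W2

def anomaly_scores(X, W1, W2):
    """Anomaly score = squared reconstruction error per sample."""
    Xhat = (X @ W1.T) @ W2.T
    return np.sum((X - Xhat) ** 2, axis=1)
```

On data lying near a low-dimensional subspace, points off that subspace reconstruct poorly and therefore receive large scores.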
“…As any compression algorithm implicitly maps the compressed vector into a feature space, and learning such mappings based on certain criteria (i.e., a minimized loss) is a quintessential machine learning task, there is a close connection between machine learning and compression. Deep (convolutional) autoencoders are particularly well suited to this task [32] and can be tailored efficiently to variations of the problem such as group anomaly detection [33]. Unlike in denoising autoencoders [34], the rebuilt image is not a cleaned image but a compressed one; denoising autoencoders cannot be used here because we have no ground truth, i.e., no a priori information on which peaks are good.…”
Section: Methods
Mentioning confidence: 99%
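The compression-to-feature-space connection described here holds even for the simplest linear compressor. As an illustrative baseline (PCA via SVD; not the convolutional autoencoders of [32], and all names below are ours), the compressed code is the feature embedding and the reconstruction error, i.e. the information the compressor lost, serves as a natural anomaly score:

```python
import numpy as np

def fit_pca(X, k):
    """Fit a rank-k linear compressor: the mean plus top-k principal directions."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]              # (d,) center, (k, d) encoder

def recon_error(X, mu, W):
    """Compress each sample to k numbers, decompress, and return the
    squared reconstruction error -- the information lost per sample."""
    Z = (X - mu) @ W.T             # compressed codes (the implicit feature map)
    Xhat = Z @ W + mu              # decompressed reconstruction
    return np.sum((X - Xhat) ** 2, axis=1)
```

Samples consistent with the training data compress almost losslessly, while samples off the learned subspace lose most of their energy in compression and score high.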
“…Likewise, the supervised dynamic tensor analysis (DTA) method [75] detects group anomalies in high-dimensional (HD) data by extracting features from time-based bibliographic data that changes over time. For GAD specifically, adversarial autoencoder-based models [3] are applied to detect anomalies in networking, while deep generative models (DGMs) [76] detect distribution-based group anomalies in image applications.…”
Section: Decomposition-based Techniques
Mentioning confidence: 99%
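In contrast with point-wise scores, distribution-based GAD scores whole groups. A minimal baseline in that spirit (far simpler than the DGMs of [76]; the aggregate statistic and the distance are our own illustrative choices) embeds each group by its feature mean and flags groups whose embedding is atypical under a Mahalanobis distance:

```python
import numpy as np

def group_anomaly_scores(groups):
    """Score each group (an (n_i, d) array of points) by the squared
    Mahalanobis distance of its mean embedding from the typical group.
    Groups whose distribution is shifted receive large scores."""
    means = np.array([g.mean(axis=0) for g in groups])  # one d-dim embedding per group
    mu = means.mean(axis=0)
    cov = np.cov(means, rowvar=False) + 1e-6 * np.eye(means.shape[1])
    inv = np.linalg.inv(cov)
    diff = means - mu
    return np.einsum("ij,jk,ik->i", diff, inv, diff)    # squared Mahalanobis per group
```

A mean embedding only detects location shifts; flagging groups with irregular shapes (e.g. wrong mixtures of pixels) requires richer group-level features, which is precisely what the cited deep generative approaches learn.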