2020
DOI: 10.48550/arxiv.2006.11629
Preprint

G2D: Generate to Detect Anomaly

Abstract: In this paper, we propose a novel method for irregularity detection. Previous research treats this problem as a One-Class Classification (OCC) task: a reference model is trained on all of the available samples, and a test sample is then considered an anomaly if it deviates from the reference model. Generative Adversarial Networks (GANs) have achieved the most promising results for OCC, but implementing and training such networks, especially for the OCC task, is a cumbersome and computationally ex…
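The OCC setup summarized in the abstract can be illustrated with a small reference-model sketch: fit a model on normal samples only, then score a test sample by how much it deviates from what the model reconstructs. This is a minimal, assumed illustration of the general OCC pattern; the autoencoder, the synthetic data, and the 95th-percentile threshold are placeholders and not the G2D method itself.

```python
# Minimal sketch of the One-Class Classification (OCC) pattern: fit a reference
# model on normal samples only, then flag test samples whose deviation exceeds a
# threshold. Model, data, and threshold rule are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

class AutoEncoder(nn.Module):
    def __init__(self, dim=32, latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 16), nn.ReLU(), nn.Linear(16, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Synthetic "normal" training data (stand-in for the available in-class samples).
normal = torch.randn(512, 32)

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    opt.step()

# Anomaly score = per-sample reconstruction error; threshold taken from training data.
with torch.no_grad():
    train_err = ((model(normal) - normal) ** 2).mean(dim=1)
    threshold = torch.quantile(train_err, 0.95)

    test = torch.cat([torch.randn(4, 32), torch.randn(4, 32) + 5.0])  # last 4 are shifted
    test_err = ((model(test) - test) ** 2).mean(dim=1)
    print((test_err > threshold).tolist())  # shifted samples should be flagged as anomalies
```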

Cited by 1 publication (1 citation statement)
References: 29 publications
“…However, their vulnerability against adversarial samples makes them insecure for critical tasks. They are prone to a class of attacks, referred to as adversarial attacks, conducted in order to sabotage their performance; these can cause misclassification or confidence reduction in sensitive tasks such as malware/anomaly detection [1,2] or autonomous driving [3], so a defense against these attacks is of the utmost importance. Generally, three strategies are presented: (1) adversarial training [4], (2) defining a robust loss for learning CNNs [5], and (3) refining [6]. The earlier approaches were to train the targeted CNN on both clean and adversarial samples as the training set.…”
mentioning
confidence: 99%
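The citing excerpt lists adversarial training as strategy (1): training the targeted network on both clean and adversarially perturbed samples. Below is a minimal, assumed sketch of that idea using FGSM perturbations; the toy model, synthetic data, and epsilon value are illustrative and not taken from the cited works.

```python
# Minimal sketch of adversarial training: train the classifier on both clean
# samples and adversarially perturbed copies of them. FGSM is used for the
# perturbation; model, data, and epsilon are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(256, 20)
y = (x[:, 0] > 0).long()   # toy labels
epsilon = 0.1              # FGSM perturbation budget (assumed)

for step in range(100):
    # FGSM: perturb inputs along the sign of the input gradient of the loss.
    x_adv = x.clone().requires_grad_(True)
    loss_adv = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss_adv, x_adv)
    x_adv = (x_adv + epsilon * grad.sign()).detach()

    # Train on clean and adversarial samples together.
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
```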