2019
DOI: 10.1007/978-3-030-21074-8_31

Anomaly Detection Using GANs for Visual Inspection in Noisy Training Data

Abstract: The detection and the quantification of anomalies in image data are critical tasks in industrial scenes, such as detecting micro scratches on products. In recent years, due to the difficulty of defining anomalies and the limit of correcting their labels, research on unsupervised anomaly detection using generative models has attracted attention. Generally, in those studies, only normal images are used for training to model the distribution of normal images. The model measures the anomalies in the target images by…
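The abstract describes the usual GAN-based recipe: train a generative model on normal images only, then score a test image by how well the learned model can reproduce it. Below is a minimal sketch of that idea in the spirit of AnoGAN-style scoring (latent search plus a residual and discriminator-feature term). It is not the authors' method; the architectures, MNIST-sized inputs, the latent search, and the weighting `lambda_` are illustrative assumptions.

```python
# Sketch of GAN-based anomaly scoring on a model trained with normal images only.
# Everything below (network sizes, 28x28 inputs, lambda_) is an assumption for
# illustration, not the paper's actual configuration.
import torch
import torch.nn as nn

LATENT_DIM = 64

class Generator(nn.Module):
    """Maps a latent code z to a 1x28x28 image; trained on normal images only."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 1, 28, 28)

class Discriminator(nn.Module):
    """Real/fake classifier; its intermediate features are reused for scoring."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
        )
        self.classifier = nn.Linear(256, 1)
    def forward(self, x):
        f = self.features(x)
        return self.classifier(f), f

def anomaly_score(x, G, D, steps=200, lr=0.05, lambda_=0.1):
    """Search for the latent code whose generated image best matches x, then
    combine the pixel residual with the discriminator-feature residual."""
    for p in list(G.parameters()) + list(D.parameters()):
        p.requires_grad_(False)  # only the latent code is optimized
    z = torch.randn(x.size(0), LATENT_DIM, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = G(z)
        residual = torch.abs(x - x_hat).sum()
        _, f_real = D(x)
        _, f_fake = D(x_hat)
        discrimination = torch.abs(f_real - f_fake).sum()
        loss = (1 - lambda_) * residual + lambda_ * discrimination
        loss.backward()
        opt.step()
    return loss.item()  # large score => x is far from the learned normal manifold

if __name__ == "__main__":
    G, D = Generator(), Discriminator()  # assume these were trained on normal data
    query = torch.randn(1, 1, 28, 28)    # stand-in for a test image
    print("anomaly score:", anomaly_score(query, G, D, steps=20))
```

A test image that the generator cannot reconstruct from any latent code receives a high score, which is how models trained only on normal data can flag defects without anomaly labels.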

Cited by 20 publications (20 citation statements). References 15 publications.
“…We use a similar setup as in (You, Robinson, and Vidal 2017); the experimental results are presented in Table 3. The results of the other methods are from (Sabokrou et al. 2018) and (Kimura and Yanagihara 2018). These results show that even for a small number of training samples, our method performs at least as well as the state-of-the-art algorithms, and in many cases it is superior to them.…”
Section: Results on Caltech-256 (mentioning)
confidence: 82%
“…Considering that Caltech-256 is a benchmark dataset for the outlier detection task, in addition to DAOC, we compare our method with 7 other methods designed specifically for detecting outliers. Those methods include Coherence Pursuit (CoP) (Rahmani and Atia 2017), Outlier Pursuit (Xu, Caramanis, and Sanghavi 2010), REAPER (Lerman et al. 2015), Dual Principal Component Pursuit (DPCP) (Tsakiris and Vidal 2015), Low-Rank Representation (LRR) (Liu, Lin, and Yu 2010), OutRank (Moonesinghe and Tan 2006), and inductive semi-supervised GAN (SSGAN) (Kimura and Yanagihara 2018).…”
Section: Results on Caltech-256 (mentioning)
confidence: 99%
“…Data Representation. Existing methods rely on low-level features, e.g., Histogram of Oriented Gradients [29] and local patterns [30]-[34]; high-level features, e.g., bag of words [35] and trajectories [36]; or deep-learned features [8], [9], [23], [24], [26], [27], [37], [38]. The latter have gained traction in recent years, since they tend to outperform low- or high-level features designed for general purposes: the deep features are tuned to each particular task.…”
Section: Related Work (mentioning)
confidence: 99%
“…For our experiments on MNIST, Caltech-256, Coil-100, and Yale B, we compared our proposed methods against a set of methods, namely LOF [18], DRAE [10], R and RD [9], GPND [24], Coherence Pursuit (CoP) [20], REAPER [14], Outlier Pursuit (OP) [39], LRR [75], DPCP [76], ℓ1-thresh. [77], R-Graph [11], AnoGAN [27], and AGAN [23].…”
Section: B. Comparison Against Baselines (mentioning)
confidence: 99%