2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP)
DOI: 10.1109/mlsp.2017.8168155
Limiting the reconstruction capability of generative neural network using negative learning

Abstract: Generative models are widely used for unsupervised learning, with applications including data compression and signal restoration. Training methods for such systems focus on the generality of the network given a limited amount of training data. A less researched class of techniques concerns the generation of only a single type of input. This is useful for applications such as constraint handling, noise reduction and anomaly detection. In this paper we present a technique to limit the generative capability of t…

Cited by 57 publications (40 citation statements)
References 13 publications (15 reference statements)
“…For example, [32] relies on the previous frame to predict the non-anomalous appearance of the road in the current one. In [8,33], input patches are compared to the output of a shallow autoencoder trained on the road texture, which makes it possible to localize the obstacle. These methods, however, are very specific to roads and lack generality.…”
Section: Anomaly Detection Via Resynthesis (mentioning, confidence: 99%)
“…1, these methods therefore fail to detect the unexpected. The second trend consists of leveraging autoencoders to detect anomalies [8,33,1], assuming that never-seen-before objects will be decoded poorly. We found, however, that autoencoders tend to learn to simply generate a lower-quality version of the input image.…”
Section: Introduction (mentioning, confidence: 99%)
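The autoencoder-based detection this excerpt describes — scoring an input by how poorly the model reconstructs it — can be sketched with a reconstruction-error score. The linear `encode`/`decode` pair below is a hypothetical stand-in for a trained model, not the method of any cited paper:

```python
import numpy as np

def anomaly_score(x, encode, decode):
    """Mean squared reconstruction error; higher means more anomalous."""
    x_hat = decode(encode(x))
    return float(np.mean((x - x_hat) ** 2))

# Toy linear "autoencoder" that can only reconstruct the first axis,
# standing in for a model trained on normal data.
W = np.array([[1.0], [0.0]])          # 2-D input, 1-D code
encode = lambda x: x @ W
decode = lambda z: z @ W.T

normal = np.array([3.0, 0.0])         # lies on the learned subspace
anomalous = np.array([0.0, 3.0])      # off the subspace; reconstructed poorly
```

An input off the learned subspace is decoded to zero and gets a large score, which is exactly the failure mode the excerpt warns about when the autoencoder instead learns a near-identity mapping.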
“…In [78], the authors propose to limit the reconstruction capability of the generative adversarial networks by learning conflicting objectives for the normal and anomalous data. They use negative examples to enforce explicit poor reconstruction.…”
Section: Controlling Reconstruction For Anomaly Detection (mentioning, confidence: 99%)
“…Negative learning is a technique used for regularizing the training of the AE in the presence of labelled data by limiting reconstruction capability (LRC) [13].…”
Section: Negative Learning (mentioning, confidence: 99%)
“…This can also be the case when some abnormal data share some characteristics of normal data in the training set or when the decoder is "too powerful" to properly decode abnormal codings. To solve the shortcomings of autoencoders, [13,18] proposed the negative learning technique that aims to control the compressing capacity of an autoencoder by optimizing conflicting objectives of normal and abnormal data. Thus, this approach looks for a solution in the gradient direction for the desired normal input and in the opposite direction for the undesired input.…”
Section: Introduction (mentioning, confidence: 99%)
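The conflicting-objective idea in this excerpt — stepping along the gradient for normal inputs and in the opposite direction for abnormal ones — can be sketched as a single training step for a tied-weight linear autoencoder. The model, loss, and learning rate are illustrative assumptions, not the exact formulation of the cited works:

```python
import numpy as np

def negative_learning_step(W, x, is_normal, lr=0.01):
    """One negative-learning update for a tied-weight linear autoencoder.

    Descends the reconstruction loss ||W W^T x - x||^2 for normal inputs
    and ascends it for abnormal ones, so abnormal data is reconstructed poorly.
    """
    z = W.T @ x                          # encode
    err = W @ z - x                      # reconstruction residual
    # Gradient of the loss w.r.t. W (up to a constant factor of 2)
    grad = np.outer(err, z) + np.outer(x, W.T @ err)
    sign = 1.0 if is_normal else -1.0    # descend for normal, ascend for abnormal
    return W - sign * lr * grad

# Illustrative usage: one step each on a normal and an abnormal example.
recon_loss = lambda W, x: float(np.sum((W @ (W.T @ x) - x) ** 2))
W = np.array([[1.0], [0.2]])
x_norm = np.array([1.0, 0.0])
x_abn = np.array([0.0, 1.0])
W_pos = negative_learning_step(W, x_norm, is_normal=True)
W_neg = negative_learning_step(W, x_abn, is_normal=False)
```

After the two steps, the loss on the normal example decreases while the loss on the abnormal example increases, matching the conflicting objectives the excerpt describes.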