2020
DOI: 10.1109/access.2020.2992804

One-Class Classification in Images and Videos Using a Convolutional Autoencoder With Compact Embedding

Abstract: In One-Class Classification (OCC) problems, the classifier is trained with samples of a class considered normal, such that exceptional patterns can be identified as anomalies. Indeed, for real-world problems, the representation of the normal class in the feature space is an important issue, considering that one or more clusters can describe different aspects of the normality. For classification purposes, it is important that these clusters be as compact (dense) as possible, for better discriminating anomalous …

Cited by 13 publications (10 citation statements)
References 45 publications
“…Although the DAE model has the ability to generate a feature space mapping at the output of the bottleneck coding layer, DAEs may be inefficient when applied directly to OCC problems, since the feature space mapping of the bottleneck can be sparse, i.e., it does not guarantee a compact mapping of the data in the bottleneck, which is an essential issue in OCC problems for a desirable result [36]. In this regard, the method is applied as a framework for enhancing the compactness of clusters in the feature space.…”
Section: Methods
confidence: 99%
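The compactness idea mentioned in the excerpt above can be sketched as a training objective that combines reconstruction error with a penalty on the spread of the bottleneck embeddings. The function below is a minimal illustration, not the paper's actual loss; the name `occ_loss` and the weighting parameter `lam` are assumptions for this sketch.

```python
import numpy as np

def occ_loss(x, x_rec, z, lam=0.1):
    """Illustrative OCC objective: reconstruction error plus a
    compactness penalty on the bottleneck embeddings z.
    (Hypothetical sketch; lam balances the two terms.)"""
    # Reconstruction term: mean squared error between input and output.
    rec = np.mean((x - x_rec) ** 2)
    # Compactness term: mean squared distance of each embedding to the
    # batch centroid; minimizing it pulls codes into a dense cluster.
    center = z.mean(axis=0)
    compact = np.mean(np.sum((z - center) ** 2, axis=1))
    return rec + lam * compact
```

Minimizing the second term during training encourages the dense bottleneck mapping that the excerpt describes as essential for discriminating anomalies.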
“…Furthermore, there is a wide margin of separation between normal samples and anomalous samples, which therefore fall outside the classification boundary. This effect on performance improvement is also addressed in [36].…”
Section: Introduction
confidence: 98%
“…Anomaly detection belongs to a family of machine learning tasks named one-class classification, in which an ML model is trained to perform a binary prediction on whether or not an input sample belongs to a particular class, and Autoencoders are widely used for this goal [119][120][121]. The usual anomaly detection strategy when using an AE is to train the model only on non-anomalous samples.…”
Section: Artificial Neural Network
confidence: 99%
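The strategy described in this excerpt — train an autoencoder on normal samples only, then flag inputs with high reconstruction error — can be sketched with a linear (PCA-based) autoencoder as a stand-in for the convolutional models the citing papers use. The helper names (`fit_linear_ae`, `detect`) and the threshold choice are assumptions for this sketch, not APIs from the cited works.

```python
import numpy as np

def fit_linear_ae(X_normal, k=1):
    """Fit a linear autoencoder on normal samples only:
    the top-k principal directions act as the bottleneck."""
    mu = X_normal.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_normal - mu, full_matrices=False)
    W = Vt[:k].T  # d x k projection: encode = project, decode = back-project
    return mu, W

def reconstruction_error(X, mu, W):
    Z = (X - mu) @ W        # encode to the k-dim bottleneck
    X_rec = Z @ W.T + mu    # decode back to input space
    return np.sum((X - X_rec) ** 2, axis=1)

def detect(X, mu, W, threshold):
    """One-class decision rule: error above threshold -> anomaly (True)."""
    return reconstruction_error(X, mu, W) > threshold
```

A usage example: fitting on points along a line and scoring an off-line point yields a near-zero error for normal inputs and a large error for the anomaly, which is the separation the one-class decision rule exploits.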
“…Moreover, if the input data represent normal operations of the monitored apparatus, anomalous inputs can be detected by comparing them with the corresponding reconstructed outputs. However, a limitation of the AE is its inability to characterize what should be normal: AEs can only detect that an anomaly occurs, but they cannot classify it [7]. As will be discussed in the next section, several solutions have been presented in the related works to overcome this limitation by resorting to conventional and more compact Machine Learning (ML) approaches or to more complex DL, which usually struggle to find an acceptable trade-off between the number of physical resources needed by their implementations, the processing speed, and the detection/classification accuracy.…”
Section: Introduction
confidence: 99%