2020
DOI: 10.48550/arxiv.2011.02578
Preprint

Learning and Evaluating Representations for Deep One-class Classification

Kihyuk Sohn,
Chun-Liang Li,
Jinsung Yoon
et al.

Abstract: We present a two-stage framework for deep one-class classification. We first learn self-supervised representations from one-class data, and then build one-class classifiers on the learned representations. The framework not only allows learning better representations but also permits building one-class classifiers that are faithful to the target task. In particular, we present a novel distribution-augmented contrastive learning that extends training distributions via data augmentation to obstruct the uniformity of…
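To make the two-stage recipe concrete, here is a minimal PyTorch sketch. The toy encoder, the random stand-in batch, the choice of rotations as the distribution augmentation, and the Gaussian (Mahalanobis) scorer in stage two are all illustrative assumptions, not the authors' released code:

```python
# Minimal sketch of the two-stage framework described in the abstract.
# Assumptions: a toy CNN encoder and random data stand in for the real
# architecture and dataset; rotations serve as the distribution augmentation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))
    def forward(self, x):
        return self.net(x)

def nt_xent(z, temperature=0.2):
    # z holds 2N embeddings; z[i] and z[i+N] are two views of the same image.
    z = F.normalize(z, dim=1)
    n = z.shape[0] // 2
    sim = z @ z.t() / temperature
    sim.fill_diagonal_(float('-inf'))              # exclude self-similarity
    target = (torch.arange(2 * n) + n) % (2 * n)   # index of the positive view
    return F.cross_entropy(sim, target)

# Stage 1: distribution-augmented contrastive training on one-class data.
# Each rotation counts as a *different* distribution, so a rotated copy of
# an image is a negative for that image's unrotated views.
enc = Encoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
x = torch.randn(16, 3, 32, 32)                     # stand-in one-class batch
for _ in range(2):                                 # a couple of toy steps
    rot = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)])
    v1 = rot + 0.05 * torch.randn_like(rot)        # two cheap augmented views
    v2 = rot + 0.05 * torch.randn_like(rot)
    loss = nt_xent(enc(torch.cat([v1, v2])))
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: freeze the encoder and fit a simple one-class scorer on the
# learned features (a Gaussian/Mahalanobis score here; the paper evaluates
# several choices of classifier). torch.cov requires PyTorch >= 1.10.
with torch.no_grad():
    f = F.normalize(enc(x), dim=1)
mu = f.mean(0)
prec = torch.linalg.inv(torch.cov(f.t()) + 1e-3 * torch.eye(f.shape[1]))
def score(batch):                                  # higher = more anomalous
    with torch.no_grad():
        g = F.normalize(enc(batch), dim=1) - mu
    return (g @ prec * g).sum(1)
```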

Cited by 26 publications (73 citation statements)
References 30 publications
“…Self-supervised learning has increasingly gained attention for Out-of-Distribution (OoD) detection [32]-[34]. OoD is a problem closely related to anomaly detection, where the objective is to detect samples that do not belong to the distribution of a given dataset.…”
Section: Edge Features: Identify Anomalous Connections
Mentioning, confidence: 99%
“…CSI [19] treats augmented inputs as positive samples and distributionally-shifted inputs as negative samples. DROC [20] shares a technical formulation similar to CSI's, without any test-time augmentation or ensemble of models.…”
Section: Related Work
Mentioning, confidence: 99%
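The positive/negative bookkeeping that these statements describe can be shown in a few lines. The sketch below is my own illustration, not code from CSI or DROC: two standard augmented views of an image are positives of each other, while a distributionally-shifted copy (e.g. a rotation) has no positive partner and serves purely as a negative:

```python
import torch

def build_pair_labels(n):
    # Embedding order: [view1 (n), view2 (n), shifted copies (n)].
    total = 3 * n
    pos = torch.zeros(total, total, dtype=torch.bool)
    idx = torch.arange(n)
    pos[idx, idx + n] = True   # view1 <-> view2 of the same image: positive
    pos[idx + n, idx] = True
    # Shifted copies get no positive partner: every pair involving them,
    # even with their own source image, is treated as a negative.
    return pos

print(build_pair_labels(2).int())
```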
“…We compare our approach with the top current self-supervised and pre-trained feature-adaptation methods [2,16,19,20,1]. Results reported in the original papers were copied.…”
Section: Comparison on Standard Datasets
Mentioning, confidence: 99%
“…DAGMM [15] and DSEBM [16] are methods that belong to this category. ii) One-class classifier-based methods fit a classifier, e.g., Deep SVDD [17] and DROC [18], to separate normal data from all other data and then use it to detect anomalies. iii) Reconstruction-based techniques learn a reconstruction model of normal images, e.g., AnoGAN [19], and detect anomalies as samples with high reconstruction error.…”
Section: Related Work
Mentioning, confidence: 99%
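As an illustration of category (ii), here is a compact sketch in the spirit of Deep SVDD, with a toy network and random stand-in data (my simplification, not the reference implementation): embed the normal data, fix a center c, shrink normal embeddings toward it, and score anomalies by squared distance to c:

```python
import torch
import torch.nn as nn

# Bias-free layers, as the Deep SVDD paper recommends, to avoid a
# trivial constant-output collapse onto the center.
net = nn.Sequential(nn.Linear(20, 64, bias=False), nn.ReLU(),
                    nn.Linear(64, 8, bias=False))
x_normal = torch.randn(256, 20)            # stand-in "normal" training data
with torch.no_grad():
    c = net(x_normal).mean(0)              # fixed center from an initial pass

opt = torch.optim.Adam(net.parameters(), lr=1e-3, weight_decay=1e-4)
for _ in range(100):                       # pull normal embeddings toward c
    loss = ((net(x_normal) - c) ** 2).sum(dim=1).mean()
    opt.zero_grad(); loss.backward(); opt.step()

def anomaly_score(x):
    # Squared distance to the center: larger means more anomalous.
    with torch.no_grad():
        return ((net(x) - c) ** 2).sum(dim=1)
```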