2020
DOI: 10.1007/978-3-030-61609-0_3
From Imbalanced Classification to Supervised Outlier Detection Problems: Adversarially Trained Auto Encoders

Cited by 6 publications (11 citation statements)
References 15 publications
“…While ATA incorporates both toxic and non-toxic samples in the training procedure, OCA's training routine exposes the model only to toxic samples, leading to underperformance even compared to the random BASE baseline. This is a well-known problem that arises especially when the inliers correlate with outliers in feature space [3,4]. Interestingly, all methods perform better on the insult test set than on the others.…”
Section: Results
confidence: 95%
“…In order to address this issue of diverse and previously unknown toxicity types, we present a comparative analysis of classification and outlier detection methods. In this work, we consider three different types of methods for toxicity detection, namely a) representation-learning-based outlier detectors, b) ensemble methods, and c) traditional deep neural networks. In the first case, a representation of the normal class (here, the toxic class) is learned, and any sample that is very dissimilar from this representation is rejected as an outlier [3,4,5,6]. In practice, this methodology has been successfully applied across a wide spectrum of domains, such as medicine [7], fraud detection [8], or intrusion detection [9].…”
Section: Introduction
confidence: 99%
“…In the related OCC setting with its focus on outlier detection, DNN-based approaches have been researched from three angles: (1) combining kernel methods [65] with DNN methods [14,60,71], (2) outlier detectors based on generative models (e.g., generative adversarial networks [22] or variational autoencoders [35]) [50,64,72], and (3) approaches based on (semi-)supervised autoencoders [8,10,26,44,45,47]. Here, the key idea is to learn a representation of the inlier distribution and subsequently estimate the outlierness of a sample via its reconstruction error.…”
Section: Related Work
confidence: 99%
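The idea quoted above, learning a representation of the inlier distribution and scoring samples by their reconstruction error, can be sketched with a linear autoencoder, which is equivalent to PCA. The data, dimensions, and threshold below are illustrative assumptions, not values from the cited papers:

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    # Fit a k-dimensional linear autoencoder on inlier data X.
    # A linear autoencoder with tied weights recovers the PCA
    # subspace, so the top-k right singular vectors serve as
    # encoder/decoder weights.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k]                        # (k, d) weight matrix
    return mu, W

def reconstruction_error(X, mu, W):
    # Encode, decode, and return the per-sample squared error.
    Z = (X - mu) @ W.T                # encode to k dimensions
    X_hat = Z @ W + mu                # decode back to d dimensions
    return np.sum((X - X_hat) ** 2, axis=1)

rng = np.random.default_rng(0)
# Toy inliers lie near a 2-D subspace of a 10-D feature space.
inliers = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 10))
inliers += 0.01 * rng.normal(size=inliers.shape)
outliers = rng.normal(size=(20, 10)) * 3.0   # off the subspace

mu, W = fit_linear_autoencoder(inliers, k=2)
err_in = reconstruction_error(inliers, mu, W)
err_out = reconstruction_error(outliers, mu, W)

# Samples off the learned inlier manifold reconstruct poorly,
# so a quantile of the inlier errors works as a decision threshold.
threshold = np.quantile(err_in, 0.99)
detection_rate = (err_out > threshold).mean()
```

Replacing the linear encoder/decoder with a nonlinear deep network gives the (semi-)supervised autoencoder setting described in angle (3) of the excerpt; the scoring rule stays the same.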
“…To this end, we propose the decoupling autoencoder (DAE) method, a novel autoencoder-based architecture that learns a radial basis function (RBF) kernel mapping the reconstruction error to class probabilities. The reconstruction error, as a measure of outlierness, is learned with a novel adversarial loss function that separates inliers from rest samples in reconstruction-error space and is based on gradient ascent, as suggested by Lübbering et al. [45]. The inlier and outlier distributions are separated by a decision boundary that is optimized end-to-end to be as close as possible to the inlier distribution.…”
Section: Introduction
confidence: 99%
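The DAE excerpt describes mapping reconstruction errors to class probabilities through an RBF kernel. A minimal sketch of that mapping step in isolation; the kernel width `gamma` and the toy error values are illustrative assumptions, whereas in the cited method the kernel parameters are learned end-to-end:

```python
import numpy as np

def rbf_class_probability(err, gamma=2.0):
    # Map a non-negative reconstruction error to an inlier
    # probability with an RBF kernel centred at zero error:
    # small errors yield probabilities near 1, large errors
    # decay toward 0. gamma here is a fixed illustrative
    # value, not a learned parameter as in the DAE method.
    return np.exp(-gamma * np.square(err))

errors = np.array([0.0, 0.5, 1.0, 3.0])   # toy reconstruction errors
probs = rbf_class_probability(errors)
# Probabilities decrease monotonically as the error grows.
```

The RBF turns an unbounded error score into a bounded, probability-like output, which is what lets the downstream decision boundary be placed in probability space rather than raw error space.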