2021
DOI: 10.48550/arxiv.2105.02965
Preprint

Out-of-distribution Detection and Generation using Soft Brownian Offset Sampling and Autoencoders

Abstract: Deep neural networks often suffer from overconfidence, which can be partly remedied by improved out-of-distribution detection. For this purpose, we propose a novel approach that allows for the generation of out-of-distribution datasets based on a given in-distribution dataset. This new dataset can then be used to improve out-of-distribution detection for the given dataset and machine learning task at hand. The samples in this dataset are, with respect to the feature space, close to the in-distribution dataset and t…
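
The abstract describes generating out-of-distribution samples that lie close to, but outside, the in-distribution data in feature space. The sketch below illustrates that general idea in a latent space, assuming an autoencoder has already been trained and `id_latents` holds encoded in-distribution samples. It is a simplified random-walk offset for illustration only, not the paper's exact Soft Brownian Offset procedure; the names `generate_ood_latents`, `target_distance`, `softness`, `encoder`, and `decoder` are hypothetical.

```python
import numpy as np

def random_unit_direction(dim, rng):
    """Draw a uniformly random direction on the unit hypersphere."""
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def generate_ood_latents(id_latents, target_distance=3.0, softness=0.5,
                         n_steps=20, seed=0):
    """Push encoded in-distribution samples away from their origin via a
    small random walk with an outward drift, so the generated points end
    up near, but outside, the in-distribution region of the latent space."""
    rng = np.random.default_rng(seed)
    step = target_distance / n_steps
    ood = []
    for z0 in id_latents:
        z = z0.astype(float)
        for _ in range(n_steps):
            # Brownian-style jitter around the current position ...
            z = z + softness * step * random_unit_direction(z.shape[0], rng)
            # ... plus a drift term pushing away from the original sample
            away = z - z0
            norm = np.linalg.norm(away)
            if norm > 1e-8:
                z = z + step * away / norm
        ood.append(z)
    return np.stack(ood)

# Hypothetical usage: encode ID data, offset the latents, then decode them
# with the autoencoder's decoder to obtain synthetic OOD training samples.
# id_latents = encoder.predict(x_id)        # assumed pre-trained encoder
# ood_latents = generate_ood_latents(id_latents, target_distance=4.0)
# x_ood = decoder.predict(ood_latents)      # assumed decoder
```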

Cited by 1 publication (1 citation statement)
References 21 publications
“…Methodology and references:
Generate OOD data by using ID data [48, 49]
Lightweight Detection of Out-of-Distribution and Adversarial Samples via Channel Mean Discrepancy [50]
Learn the weights of training samples to eliminate the dependence between features and false correlations [51]
The strong link between discovering the causal structure of the data and finding reliable features [52, 53]
Holochain-based security and privacy-preserving framework [54]
Enhance robustness of Out-of-Distribution [55-58]
The OOD detection problem in DNNs as a statistical hypothesis testing problem [59]
The linear classifier obtained by minimizing the cross-entropy loss after the graph convolution generalizes to out-of-distribution data [45, 60, 61]
Invariant risk minimization (IRM) solves the prediction problem [62]…”
Section: Number
mentioning, confidence: 99%