2022
DOI: 10.3389/fnbot.2022.1018383

Semantic enhanced for out-of-distribution detection

Abstract: While improving performance on out-of-distribution (OOD) benchmark datasets, existing approaches miss a portion of the valid discriminative information, which reduces their performance on same-manifold OOD (SMOOD) data. The key to addressing this problem is to prompt the model to learn effective and comprehensive in-distribution (ID) semantic features. In this paper, two strategies are proposed to improve the generalization ability of the model to OOD data. Firstly, the original samples are …

Cited by 7 publications (9 citation statements)
References 24 publications
“…Zhang et al (Zhang et al 2020) find that the representations of inputs in DGMs can be approximated by a fitted Gaussian, and that the distance between the distribution of input representations and the prior of the ID dataset can be utilized to detect OOD samples. Jiang et al (Jiang, Sun, and Yu 2022) propose to compare the training and test samples in the latent space of a flow model. However, these methods require retraining when encountering new ID datasets, which is computationally expensive and time-consuming.…”
Section: Related Work
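To make the fitted-Gaussian idea in the excerpt above concrete, here is a minimal sketch: fit a Gaussian to ID latent representations and flag test codes whose Mahalanobis-style distance to it is large. The function names, the Mahalanobis scoring rule, and the 95% threshold are illustrative assumptions, not the exact procedure of Zhang et al (2020).

    import numpy as np

    def fit_gaussian(latents):
        # Fit mean and covariance to ID latent codes (illustrative estimator,
        # not the cited paper's exact one); regularize for invertibility.
        mu = latents.mean(axis=0)
        cov = np.cov(latents, rowvar=False) + 1e-6 * np.eye(latents.shape[1])
        return mu, np.linalg.inv(cov)

    def ood_score(z, mu, prec):
        # Squared Mahalanobis distance to the fitted ID Gaussian; larger = more OOD.
        d = z - mu
        return float(d @ prec @ d)

    rng = np.random.default_rng(0)
    id_latents = rng.normal(size=(1000, 16))   # stand-in for encoder outputs on ID data
    mu, prec = fit_gaussian(id_latents)
    threshold = np.percentile([ood_score(z, mu, prec) for z in id_latents], 95)
    print(ood_score(rng.normal(3.0, 1.0, size=16), mu, prec) > threshold)  # likely True

The same structure also exhibits the drawback the excerpt raises: mu and prec are tied to one ID dataset and must be refit whenever the ID dataset changes.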
“…Since it is time-consuming and laborious to obtain labeled data in real scenarios, as an alternative, Deep Generative Models (DGMs) have been used to capture the sample distribution of In-Distribution (ID) datasets (Serrà et al 2020). However, most DGM-based methods focus on elaborating architectures (Ren et al 2019; Serrà et al 2020), designing loss functions (Xiao, Yan, and Amit 2020) or statistical models (Zhang et al 2020; Jiang, Sun, and Yu 2022), targeting the specific feature representation or data distribution of ID samples (Sun et al 2023), i.e., they need retraining to adapt to the normal pattern of new ID datasets. This motivates the following unexplored question: How can we make OOD detection transferable across new ID datasets?…”
Section: Introduction
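The retraining coupling the excerpt describes can be illustrated with a toy likelihood-thresholding detector. The kernel density estimate below is a deliberately simple stand-in for a trained DGM (VAE, normalizing flow, etc.); the score, threshold, and data are assumptions for illustration only.

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)

    # Stand-in "DGM": a kernel density estimate fit to one ID dataset. A real
    # method would train a VAE or flow here; either way, the density model is
    # bound to that specific dataset.
    id_train = rng.normal(0.0, 1.0, size=(500, 2))
    density = gaussian_kde(id_train.T)

    def nll_score(x):
        # Negative log-likelihood under the ID density; higher = more OOD.
        return -np.log(density(x.T) + 1e-12)

    threshold = np.percentile(nll_score(id_train), 95)
    print(nll_score(np.array([[5.0, 5.0]])) > threshold)   # [ True]

    # A new ID dataset means refitting `density` from scratch -- the retraining
    # cost that motivates the transferability question above.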
“…The method collects a batch of operational data before processing and classifying it as in or out of distribution. For this reason, it falls into the category of groupwise methods [22]. Unlike pointwise methods, groupwise detection confirms a type of situation (in or out) without relying on a single point, which could be a spike in an otherwise steady trend.…”
Section: A. Incremental Technique
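The pointwise-versus-groupwise distinction can be sketched with a toy stream of OOD scores: a pointwise rule fires on a single spike, while a groupwise rule judges a window of recent scores together. The window size, the mean statistic, and the threshold are illustrative assumptions; the excerpt does not specify the cited method's actual group statistic.

    import numpy as np

    def pointwise_flags(scores, tau):
        # Pointwise: each sample is judged alone, so one noisy spike raises an alarm.
        return scores > tau

    def groupwise_flag(scores, tau, window=50):
        # Groupwise: judge the last `window` scores together; a steady trend must
        # actually shift before the group is declared out of distribution.
        return scores[-window:].mean() > tau

    rng = np.random.default_rng(2)
    scores = rng.normal(0.0, 1.0, 200)
    scores[120] = 6.0                    # a single spike in an otherwise ID stream

    tau = 3.0
    print(pointwise_flags(scores, tau).any())   # True: the lone spike triggers
    print(groupwise_flag(scores, tau))          # False: the group statistic ignores it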
“…Deep generative models have received considerable attention over the past several years, with applications in data augmentation [1, 2], out-of-distribution detection [3–5], and as a prior for compressed sensing [6–9]. To support this, a high-resolution (HR) generative prior for medical imaging is needed and, accordingly, many papers have explored the training and improvement of such models.…”
Section: Introduction