2021
DOI: 10.1007/978-3-030-86523-8_48

Label-Assisted Memory Autoencoder for Unsupervised Out-of-Distribution Detection

Cited by 4 publications (5 citation statements)
References 15 publications

“…Similarly, the overconfident phenomenon is also reported in (Choi, Jang, and Alemi 2018; Denouden et al. 2018; Gong et al. 2019; Nalisnick et al. 2018; Zhang et al. 2021). Lastly, the reconstruction error distribution of "hard" OODs, which contain richer content and more diverse pixels than ID data, is skewed to the right as expected, and this is consistent with the reconstruction error assumption.…”
Section: Adjustment Coefficient (supporting)
confidence: 79%
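
To make the "reconstruction error assumption" referenced above concrete, the following minimal sketch scores samples by their per-sample reconstruction error and inspects the skew of the resulting OOD error distribution. The toy autoencoder and the random stand-in batches are illustrative assumptions, not the setup of the cited paper.

    # Minimal sketch of the reconstruction error assumption: OOD inputs should
    # reconstruct worse than ID inputs, and "hard" OODs tend to yield a
    # right-skewed error distribution. All components below are placeholders.
    import torch
    import torch.nn as nn
    from scipy.stats import skew

    autoencoder = nn.Sequential(          # toy fully connected AE for 784-dim inputs
        nn.Linear(784, 64), nn.ReLU(),    # encoder
        nn.Linear(64, 784), nn.Sigmoid(), # decoder
    )

    def reconstruction_errors(x):
        """Mean squared reconstruction error, one scalar per sample."""
        with torch.no_grad():
            x_hat = autoencoder(x)
        return ((x - x_hat) ** 2).mean(dim=1)

    id_x  = torch.rand(256, 784)          # stand-in for in-distribution data
    ood_x = torch.rand(256, 784) * 2.0    # stand-in for "hard" OOD data

    id_err, ood_err = reconstruction_errors(id_x), reconstruction_errors(ood_x)
    print("ID  mean error:", id_err.mean().item())
    print("OOD mean error:", ood_err.mean().item())
    print("OOD error skew:", skew(ood_err.numpy()))  # positive value = right skew
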
“…(b) an autoencoder (AE), consisting of an encoder f_enc with parameters θ_enc that compresses high-dimensional data features, and a decoder f_dec with parameters θ_dec that reconstructs x, denoted x̂, from the latent representation z. Different from works (Oza and Patel 2019; Zhang et al. 2021) that integrate the classifier and autoencoder into one hybrid model and use the raw-pixel reconstruction error as the score function, the CLF and AE modules in our method are independent of each other. Furthermore, we transform the reconstruction error into the CLF latent space instead of pixel space for further aggregation.…”
Section: Overall Concept (mentioning)
confidence: 97%
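
The decoupled design described in this statement (an independent classifier and autoencoder, with the reconstruction error measured in the classifier's latent space rather than in pixel space) can be sketched roughly as below. The toy networks and the embedding-distance score are assumptions made for illustration, not the citing paper's exact formulation.

    # Rough sketch: independent classifier (CLF) and autoencoder (AE), with the
    # reconstruction error mapped into the CLF latent space instead of pixel space.
    # The tiny architectures and the norm-based score are illustrative placeholders.
    import torch
    import torch.nn as nn

    clf_features = nn.Sequential(                 # classifier feature extractor (CLF latent space)
        nn.Linear(784, 128), nn.ReLU(),
        nn.Linear(128, 32),
    )
    encoder = nn.Sequential(nn.Linear(784, 16), nn.ReLU())     # f_enc with parameters θ_enc
    decoder = nn.Sequential(nn.Linear(16, 784), nn.Sigmoid())  # f_dec with parameters θ_dec

    def latent_space_score(x):
        """Distance between CLF embeddings of x and its reconstruction x̂."""
        with torch.no_grad():
            z = encoder(x)                        # latent representation z
            x_hat = decoder(z)                    # reconstruction x̂
            h, h_hat = clf_features(x), clf_features(x_hat)
        return (h - h_hat).norm(dim=1)            # larger distance suggests OOD

    scores = latent_space_score(torch.rand(8, 784))
    print(scores)
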
“…Secondly, introducing an additional loss term significantly deviates the training objective from its original task, i.e., obtaining satisfactory classification. To alleviate these constraints, a few works in the literature have explored unsupervised detection of OOD samples, i.e., without using any OOD samples during the training phase [7,8]. One straightforward approach is to use the softmax value as an indicator for OOD detection [9,10].…”
Section: Introduction (mentioning)
confidence: 99%
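
One concrete form of the softmax-based indicator mentioned at the end of this statement is the maximum softmax probability (MSP) score: inputs whose largest softmax value is low are flagged as OOD. The classifier and the threshold in the sketch below are placeholders chosen only for illustration.

    # Sketch of the softmax-value indicator for OOD detection: use the maximum
    # softmax probability (MSP) of a classifier as a confidence score and flag
    # low-confidence inputs as OOD. The classifier and threshold are placeholders.
    import torch
    import torch.nn as nn

    classifier = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

    def msp_score(x):
        """Maximum softmax probability per sample; low values suggest OOD."""
        with torch.no_grad():
            probs = torch.softmax(classifier(x), dim=1)
        return probs.max(dim=1).values

    scores = msp_score(torch.rand(8, 784))
    is_ood = scores < 0.5                 # illustrative threshold; tuned on ID data in practice
    print(scores)
    print(is_ood)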