2021
DOI: 10.48550/arxiv.2107.08976
Preprint

OODformer: Out-Of-Distribution Detection Transformer

Abstract: A serious problem in image classification is that a trained model might perform well on input data that originates from the same distribution as the data available for model training, but perform much worse on out-of-distribution (OOD) samples. In real-world safety-critical applications, in particular, it is important to be aware when a new data point is OOD. To date, OOD detection is typically addressed using either confidence scores, autoencoder-based reconstruction, or contrastive learning. However, glo…
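The abstract names confidence scores as one common route to OOD detection. As a point of reference, below is a minimal sketch of such a confidence-score baseline (maximum softmax probability) assuming a generic PyTorch classifier; the model, threshold, and names are illustrative assumptions, not the OODformer method itself.

```python
# Minimal sketch of a confidence-score OOD baseline (maximum softmax
# probability). The classifier and threshold are illustrative assumptions.
import torch
import torch.nn.functional as F


@torch.no_grad()
def msp_scores(model: torch.nn.Module, images: torch.Tensor) -> torch.Tensor:
    """Return the maximum softmax probability per image; low values
    suggest the input may be out-of-distribution."""
    model.eval()
    logits = model(images)                 # (batch, num_classes)
    probs = F.softmax(logits, dim=-1)
    return probs.max(dim=-1).values        # (batch,)


def flag_ood(scores: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    # Inputs whose confidence falls below the (hypothetical) threshold
    # are flagged as OOD.
    return scores < threshold
```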

Cited by 9 publications (13 citation statements)
References 32 publications
“…Extensive research exists on CNN-based out-of-distribution detection approaches in medical imaging [431]-[435]. Recently, a few attempts have shown that large-scale pretrained ViTs, owing to their high-quality representations, can significantly improve the state of the art on a range of out-of-distribution tasks across different data modalities [386], [430], [436]. However, the investigation in these works has mostly been carried out on toy datasets such as CIFAR-10 and CIFAR-100, and therefore does not necessarily reflect out-of-distribution detection performance on medical images with complex textures and patterns, high variance in feature scale (as in X-ray images), and local, specific features.…”
Section: Domain Adaptation and Out-of-Distribution Detection
confidence: 99%
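The statement above attributes the gains to the high-quality representations of large-scale pretrained ViTs. Below is a minimal sketch of how such representations are often scored for OOD, assuming features have already been extracted with a pretrained backbone; the cosine-distance-to-class-mean score and all names here are illustrative assumptions rather than the exact procedure of any cited work.

```python
# Hedged sketch of distance-based OOD scoring on features from a
# pretrained ViT. The feature extractor is left abstract; class means
# and the cosine-distance score are illustrative choices.
import torch
import torch.nn.functional as F


def class_means(features: torch.Tensor, labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Mean in-distribution feature vector per class, shape (num_classes, dim)."""
    dim = features.shape[1]
    means = torch.zeros(num_classes, dim)
    for c in range(num_classes):
        means[c] = features[labels == c].mean(dim=0)
    return means


def ood_score(test_features: torch.Tensor, means: torch.Tensor) -> torch.Tensor:
    """Distance to the closest class mean; larger values suggest OOD."""
    sims = F.normalize(test_features, dim=-1) @ F.normalize(means, dim=-1).T
    return 1.0 - sims.max(dim=-1).values
```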
“…Another line of research assumes that auxiliary data sets are available during the training process. For example, Hendrycks et al. (2019a) let classifiers learn from example OOD samples before generalizing to other types of OOD data; Mohseni et al. (2020) use auxiliary data to train OOD detectors; and more recently, Fort et al. (2021) and Koner et al. (2021) fine-tune and improve on pre-trained Transformer models.…”
Section: Related Work
confidence: 99%
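The auxiliary-data line of work cited above (e.g., Hendrycks et al., 2019a) exposes the classifier to example OOD data during training. Below is a hedged sketch of that outlier-exposure idea, assuming PyTorch tensors and an unspecified model; the loss weighting and all names are illustrative, not the exact formulation of the cited papers.

```python
# Hedged sketch of outlier exposure: auxiliary OOD samples are seen during
# training, and the classifier is pushed toward a uniform prediction on them.
# The model, loaders, and lambda_oe are illustrative assumptions.
import torch
import torch.nn.functional as F


def outlier_exposure_loss(model, in_images, in_labels, ood_images, lambda_oe=0.5):
    # Standard cross-entropy on in-distribution data.
    in_logits = model(in_images)
    ce = F.cross_entropy(in_logits, in_labels)

    # Cross-entropy to the uniform distribution on auxiliary OOD data,
    # i.e. the mean negative log-probability over all classes.
    ood_logits = model(ood_images)
    uniform_ce = -F.log_softmax(ood_logits, dim=-1).mean()

    return ce + lambda_oe * uniform_ce
```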
“…Transformer in Vision: In recent times, transformer-based architectures have emerged as the de facto standard model for various multi-domain and multi-modal tasks such as image classification [13], object detection [7], and out-of-distribution detection [26]. DETR [7] proposed an end-to-end transformer-based object detection approach with learnable object queries ([obj]-tokens) and direct set-based prediction.…”
Section: Related Work
confidence: 99%