2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.00823
The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization

Cited by 705 publications (525 citation statements)
References 13 publications
“…We extend the work of Wang et al. [1] by introducing a novel loss function, using a diversity regularizer, and prepending a parametrized input transformation module to the network. We show that our approach outperforms previous work and makes pretrained models robust against common corruptions on image classification benchmarks such as ImageNet-C [8] and ImageNet-R [9]. Sun et al. [10] investigate test-time adaptation using a self-supervision task.…”
Section: Introduction
confidence: 83%
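To make the recipe in this excerpt concrete, the sketch below shows a minimal PyTorch version of test-time adaptation with a parametrized input transformation prepended to a frozen pretrained classifier, updated on unlabeled test batches with a confidence term plus a diversity regularizer. All names (InputTransform, adaptation_loss, adapt_step) are illustrative assumptions, not the cited authors' API, and the entropy-based loss is a generic stand-in rather than their novel loss function.

import torch
import torch.nn as nn

class InputTransform(nn.Module):
    # Hypothetical parametrized input transformation; initialized to the
    # identity so adaptation starts from the unmodified pretrained behavior.
    def __init__(self, channels: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        nn.init.dirac_(self.conv.weight)
        nn.init.zeros_(self.conv.bias)

    def forward(self, x):
        return self.conv(x)

def adaptation_loss(logits: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    # Confidence term: mean per-sample entropy (lower = more confident).
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    # Diversity regularizer: maximize the entropy of the batch-averaged
    # prediction, which discourages collapse onto a single class.
    marginal = probs.mean(dim=0)
    neg_marginal_entropy = (marginal * marginal.clamp_min(1e-8).log()).sum()
    return entropy + lam * neg_marginal_entropy

def adapt_step(transform, network, x, optimizer):
    # One unsupervised update on a test batch; the pretrained network can
    # stay frozen while only the transform is optimized, e.g.
    # optimizer = torch.optim.SGD(transform.parameters(), lr=1e-3).
    optimizer.zero_grad()
    loss = adaptation_loss(network(transform(x)))
    loss.backward()
    optimizer.step()
    return loss.item()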
“…This is beneficial because any well-performing pretrained network can be readily reused, e.g., a network trained on proprietary data not available to the public. We show that our method significantly improves performance both on models trained on clean ImageNet data, such as a ResNet50 [13], and on robust models, such as ResNet50 models trained with DeepAugment+AugMix [9]. In summary, our main contributions are as follows: we propose non-saturating losses based on the negative log-likelihood ratio, such that gradients from high-confidence predictions still contribute to test-time adaptation.…”
Section: Introduction
confidence: 92%
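The last contribution is worth unpacking: with plain entropy minimization, gradients vanish as a prediction approaches one-hot, so the most confident samples stop driving adaptation. The sketch below, a hedged illustration assuming softmax outputs, contrasts entropy with a negative log-likelihood-ratio loss of the form -sum_c p_c * log(p_c / sum_{c'!=c} p_{c'}); the cited work's exact formulation may differ in its details.

import torch

def entropy_loss(logits):
    p = logits.softmax(dim=1)
    return -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean()

def log_likelihood_ratio_loss(logits):
    # Ratio of each class probability to the total mass of all other
    # classes; the log of this ratio keeps growing as p_c -> 1, so the
    # gradient does not saturate at high confidence.
    p = logits.softmax(dim=1)
    ratio = p / (1.0 - p).clamp_min(1e-8)
    return -(p * ratio.clamp_min(1e-8).log()).sum(dim=1).mean()

# Compare gradients on an already-confident prediction:
logits = torch.tensor([[12.0, 0.0, 0.0]], requires_grad=True)
entropy_loss(logits).backward()
print(logits.grad.abs().max())   # tiny (~1e-4): entropy has saturated

logits.grad = None
log_likelihood_ratio_loss(logits).backward()
print(logits.grad.abs().max())   # O(1): still a useful adaptation signal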