2021
DOI: 10.48550/arxiv.2106.04015
Preprint
Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning

Abstract: High-quality estimates of uncertainty and robustness are crucial for numerous real-world applications, especially for deep learning which underlies many deployed ML systems. The ability to compare techniques for improving these estimates is therefore very important for research and practice alike. Yet, competitive comparisons of methods are often lacking due to a range of reasons, including: compute availability for extensive tuning, incorporation of sufficiently many baselines, and concrete documentation for …

Cited by 31 publications (34 citation statements)
References 17 publications
“…We also compare against the Posterior Network model [8], which also offers distance-aware uncertainties, but it only applies to problems with few classes. Our non-synthetic experiments are developed within the open source codebases; uncertainty baselines [43] and robustness metrics [16] (to assess the OOD performances). Implementation details are deferred to Appendix A.1.…”
Section: Methods
confidence: 99%
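The excerpt above mentions assessing OOD performance with robustness metrics. A minimal sketch of one common OOD evaluation of this kind, using the maximum softmax probability as the in-distribution score and AUROC as the metric. This is an illustration of the general technique, not the cited paper's exact pipeline; all data here are toy values.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def auroc(id_scores, ood_scores):
    """AUROC for separating in-distribution (high score) from OOD,
    computed as the fraction of correctly ordered (ID, OOD) pairs."""
    pairs, wins = 0, 0.0
    for a in id_scores:
        for b in ood_scores:
            pairs += 1
            if a > b:
                wins += 1.0
            elif a == b:
                wins += 0.5
    return wins / pairs

# Toy logits: confident in-distribution vs. diffuse OOD predictions.
id_logits = [[4.0, 0.1, 0.2], [3.5, 0.3, 0.1]]
ood_logits = [[1.0, 0.9, 1.1], [0.8, 1.0, 0.9]]
id_conf = [max(softmax(l)) for l in id_logits]
ood_conf = [max(softmax(l)) for l in ood_logits]
print(auroc(id_conf, ood_conf))  # 1.0: every ID confidence exceeds every OOD confidence
```

In practice the scores would come from a trained classifier evaluated on an in-distribution test set and a shifted or OOD set.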
“…For most baselines, we used the hyperparameters from the uncertainty baselines library [43]. On CIFAR, we trained our HetSNGP with a learning rate of 0.1 for 300 epochs and used R = 6 factors for the heteroscedastic covariance, a softmax temperature of τ = 0.5 and S = 5000 Monte Carlo samples.…”
Section: Appendix A
confidence: 99%
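The hyperparameters quoted in this excerpt can be collected into a small configuration sketch. The key names below are illustrative, not actual flags of the uncertainty_baselines library:

```python
# CIFAR hyperparameters for HetSNGP as quoted in the excerpt above.
# Key names are hypothetical; only the values come from the text.
hetsngp_cifar_config = {
    "learning_rate": 0.1,
    "epochs": 300,
    "num_factors": 6,             # R, heteroscedastic covariance factors
    "softmax_temperature": 0.5,   # tau
    "num_mc_samples": 5000,       # S, Monte Carlo samples
}
print(hetsngp_cifar_config["epochs"])  # 300
```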
“…Alternative methods are based, indicatively, on ensembles of NN optimization iterates or independently trained NNs [39][40][41][42][43][44][45][46][47][48][49][50], as well as on the evidential framework [51][52][53][54][55][56][57][58][59]. Although Bayesian methods and ensembles are thoroughly discussed in this paper, the interested reader is also directed to the recent review studies in [60][61][62][63][64][65][66][67][68][69][70][71][72] for more information. Clearly, in the context of SciML, which may involve differential equations with unknown or uncertain terms and parameters, UQ becomes an even more demanding task; see Fig.…”
Section: Motivation and Scope of the Paper
confidence: 99%
“…To address this, we present Uncertainty Toolbox: an open-source Python library that helps to assess, visualize, and improve UQ. There are other libraries such as Uncertainty Baselines (Nado et al, 2021) and Robustness Metrics (Djolonga et al, 2020) that focus on aspects of UQ in the classification setting. Uncertainty Toolbox focuses on the regression setting and additionally aims to provide user-friendly utilities such as visualizations, a glossary of terms, and an organized collection of key paper references.…”
Section: Introduction
confidence: 99%
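A minimal sketch of one regression-UQ assessment of the kind such toolboxes automate: the empirical coverage of Gaussian predictive intervals, i.e. the fraction of targets that fall inside the model's central ~95% interval. This is a generic illustration, not Uncertainty Toolbox's actual API; all data are toy values.

```python
def interval_coverage(y_true, mu, sigma, z=1.96):
    """Fraction of targets inside the central ~95% Gaussian predictive
    interval [mu - z*sigma, mu + z*sigma]. A well-calibrated model
    should score close to 0.95 on large samples."""
    hits = sum(1 for y, m, s in zip(y_true, mu, sigma)
               if m - z * s <= y <= m + z * s)
    return hits / len(y_true)

# Toy predictions: means mu with predictive std sigma per point.
y_true = [0.9, 2.1, 3.0, 4.2]
mu     = [1.0, 2.0, 3.1, 4.0]
sigma  = [0.2, 0.2, 0.2, 0.2]
print(interval_coverage(y_true, mu, sigma))  # 1.0: all residuals < 1.96 * sigma
```

On real data, coverage well below the nominal level indicates overconfident intervals; coverage far above it indicates underconfident ones.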