2021
DOI: 10.48550/arXiv.2109.10254
Preprint
Uncertainty Toolbox: an Open-Source Library for Assessing, Visualizing, and Improving Uncertainty Quantification

Youngseog Chung, Ian Char, Han Guo, et al.
Cited by 23 publications (34 citation statements)
References 15 publications
“…Each of the metrics presented in this section may correspond to first- or second-order statistics of the predictive model, or to the whole distribution (PDF or CDF). More information on evaluation metrics as well as comparative studies can be found in [107][108][109][110][111][112][113][114][115] and Appendix C.…”
Section: Evaluation: Accuracy and Uncertainty Quality Evaluation (confidence: 99%)
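To illustrate the distinction this statement draws, the sketch below contrasts a first-order-statistic metric (mean absolute error, which scores only the predicted mean) with a whole-distribution metric (Gaussian negative log-likelihood, which scores the full predictive density). This is a minimal NumPy example written for this page, not code from the Uncertainty Toolbox itself; the function names and the Gaussian predictive model are our own assumptions.

```python
import numpy as np

def mae(y, mu):
    """Mean absolute error: a first-order-statistic (point-prediction) metric."""
    return np.mean(np.abs(y - mu))

def gaussian_nll(y, mu, sigma):
    """Negative log-likelihood under a Gaussian predictive distribution:
    a whole-distribution metric that also scores the predicted spread."""
    return 0.5 * np.mean(np.log(2 * np.pi * sigma**2) + (y - mu) ** 2 / sigma**2)

# Toy data: a well-specified predictor (mean 0, std 1) on N(0, 1) targets.
rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=1000)
mu = np.zeros_like(y)
sigma = np.ones_like(y)

print(mae(y, mu))
print(gaussian_nll(y, mu, sigma))
```

Note that two predictors with identical MAE can have very different NLL, since only the latter penalizes over- or under-confident predicted variances.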
“…These approaches can also prove useful for addressing model misspecification cases, e.g., when a wrong data noise model has been utilized (see also Section A.4). More information on the active area of research related to calibration approaches for function approximation can be found in [50,109,114,[117][118][119][120][121] and Appendix C. Computational results can be found in Sections 6.1, 6.3, and 6.5.…”
Section: Post-training Improvement: Calibration (confidence: 99%)
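One of the simplest post-hoc calibration fixes for the misspecified-noise case mentioned here is to rescale all predicted standard deviations by a single scalar fit on held-out data; for Gaussian predictions, the NLL-optimal scalar has a closed form. The sketch below is an illustrative assumption of ours, not the recalibration method of any particular cited work.

```python
import numpy as np

def fit_std_scale(y_cal, mu_cal, sigma_cal):
    """NLL-optimal scalar s for rescaling Gaussian predicted std devs:
    s^2 = mean of squared standardized residuals on a calibration set."""
    return np.sqrt(np.mean((y_cal - mu_cal) ** 2 / sigma_cal**2))

# Misspecified noise model: true noise std is 2, model claims std 1.
rng = np.random.default_rng(1)
y = rng.normal(0.0, 2.0, size=5000)
mu = np.zeros_like(y)
sigma = np.ones_like(y)

s = fit_std_scale(y, mu, sigma)
print(s)  # close to 2: s * sigma recovers the true noise scale
```

Because the correction is a single shared scalar, it fixes a globally wrong noise scale but cannot repair input-dependent miscalibration, for which richer recalibration maps are needed.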
“…This definition is similar to the definitions based on credible intervals that have been used in many prior works [26]. However, it has been suggested that this definition should be referred to as average calibration because it reflects the average calibration over the entire dataset rather than calibration on each individual data point [27], [28]. It is possible to have uninformative models with perfect average calibration.…”
Section: Uncertainty in Deep Learning-based Regression (confidence: 99%)
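The closing claim, that an uninformative model can be perfectly average-calibrated, is easy to demonstrate numerically: a predictor that ignores the input and always outputs the marginal distribution of the targets achieves the nominal coverage on average, while being useless (and conditionally miscalibrated) at every individual point. The sketch below is our own toy construction under the assumption of Gaussian predictions.

```python
import numpy as np
from scipy.stats import norm

def avg_coverage(y, mu, sigma, p):
    """Observed fraction of targets inside the centered p-credible interval
    of a Gaussian predictive distribution (average over the dataset)."""
    z = norm.ppf(0.5 + p / 2)
    return np.mean(np.abs(y - mu) <= z * sigma)

# Data with a strong signal: y = x + small noise, x ~ N(0, 1).
rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=20000)
y = x + rng.normal(0.0, 0.1, size=x.size)  # marginally y ~ N(0, 1.01)

# "Uninformative" model: ignores x and predicts the marginal of y everywhere.
mu = np.zeros_like(y)
sigma = np.full_like(y, np.sqrt(1.01))

print(avg_coverage(y, mu, sigma, 0.9))  # close to 0.90 despite ignoring x
```

Conditioned on any particular x, the same prediction is badly miscalibrated (its intervals are far too wide around the true conditional mean x), which is exactly why [27], [28] argue that average calibration alone is an insufficient target.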