2022
DOI: 10.1007/978-3-031-19775-8_18
BayesCap: Bayesian Identity Cap for Calibrated Uncertainty in Frozen Neural Networks

Cited by 11 publications (12 citation statements)
References 47 publications
“…We use the Eigen split [12] with the maximum depth set to 80 meters for evaluation. In [46], a small OOD evaluation is conducted with KITTI as in-distribution and Places365 [55] and India Driving [47] as OOD. In contrast to this evaluation, we suggest using two different in-distribution datasets and treating the OOD datasets as separate settings instead of combining them.…”
Section: Methods
confidence: 99%
“…Another predictive method infers a prior distribution over the model output by log-likelihood maximization [39]. In [46], the prior output distribution is learned with a second model from a fixed and already trained image-to-image translation model. In contrast, we train a second decoder in a post hoc fashion to reconstruct the input image from the features of a depth encoder and thus learn its output distribution.…”
Section: Related Work
confidence: 99%
“…In the case of the distribution P(J|I) in the above equation, recent studies have suggested that, in the presence of outliers and human factors, the residuals (the differences between predicted values and ground-truth values) of each pixel typically follow a heavy-tailed distribution [19], [33], [39]. Therefore, to make our model robust to various outliers, we use a heteroscedastic generalized Gaussian distribution to model the heavy-tailed distribution of the output.…”
Section: B. Uncertainty Estimation
confidence: 99%
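The heteroscedastic generalized Gaussian mentioned in the statement above has density p(y | μ, α, β) = β / (2α Γ(1/β)) · exp(−(|y − μ|/α)^β), where α is a per-pixel scale and β a per-pixel shape parameter (β = 2 recovers a Gaussian; β < 2 gives heavier tails). A minimal NumPy sketch of the corresponding per-pixel negative log-likelihood, which is what such a model would minimize during training (the function name and vectorized form are illustrative, not taken from the cited papers):

```python
import numpy as np
from math import lgamma

def ggd_nll(y, mu, alpha, beta):
    """Per-element negative log-likelihood of a heteroscedastic
    generalized Gaussian distribution:
        p(y) = beta / (2 * alpha * Gamma(1/beta))
               * exp(-(|y - mu| / alpha) ** beta)
    so  NLL = (|y - mu| / alpha) ** beta
              - log(beta) + log(alpha) + log(Gamma(1/beta)) + log(2).
    mu, alpha, beta may be arrays (one value per pixel)."""
    y, mu, alpha, beta = map(np.asarray, (y, mu, alpha, beta))
    log_gamma = np.vectorize(lgamma)          # elementwise log Gamma
    return ((np.abs(y - mu) / alpha) ** beta
            - np.log(beta) + np.log(alpha)
            + log_gamma(1.0 / beta) + np.log(2.0))
```

As a sanity check, beta = 2 with alpha = sqrt(2)·sigma reduces to the usual Gaussian NLL; values of beta below 2 penalize large residuals less steeply, which is the robustness-to-outliers property the statement refers to.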
“…The quality of results obtained from tests and measurements is crucial in practical vision-based measurement applications such as autonomous driving and medical imaging, and estimating uncertainty provides an effective means to assess outcome quality [16]- [18]. Well-calibrated uncertainty enables experts or devices to intervene in highly uncertain predictions, thereby enhancing decision-making reliability [19]- [21]. Therefore, researchers are encouraged to address the uncertainty in deep learning in order to enhance the reliability of measurement systems based on deep learning [16].…”
Section: Introduction
confidence: 99%