2023
DOI: 10.1101/2023.11.14.567028
Preprint

Calibrating Bayesian decoders of neural spiking activity

Ganchao Wei,
Zeinab Tajik Mansouri,
Xiaojing Wang
et al.

Abstract: Accurately decoding external variables from observations of neural activity is a major challenge in systems neuroscience. Bayesian decoders, which provide probabilistic estimates, are among the most widely used. Here we show how, in many common settings, the probabilistic predictions made by traditional Bayesian decoders are overconfident. That is, the estimates for the decoded stimulus or movement variables are more certain than they should be. We then show how Bayesian decoding with latent variables, taking…
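The overconfidence the abstract describes can be measured with a frequentist coverage test: simulate spikes from a known stimulus, decode, and check how often the decoder's 90% credible interval actually contains the true stimulus. The sketch below is illustrative only (the tuning-curve model, grid, and all settings are assumptions, not the paper's setup); because the decoder here is exactly matched to the generative model, its intervals come out well calibrated.

```python
# Illustrative sketch: empirical coverage of a Bayesian decoder's
# 90% credible intervals, assuming independent Poisson neurons with
# known Gaussian tuning curves (all names and settings hypothetical).
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(-3, 3, 241)       # decoding grid for the stimulus s
centers = np.linspace(-3, 3, 20)     # preferred stimuli of 20 neurons
# expected spike counts for every (grid point, neuron) pair
rates = 10.0 * np.exp(-0.5 * (grid[:, None] - centers[None, :]) ** 2)

def posterior(counts):
    # log p(s | counts) under a flat prior, up to an additive constant
    loglik = counts @ np.log(rates).T - rates.sum(axis=1)
    p = np.exp(loglik - loglik.max())
    return p / p.sum()

covered, n_trials = 0, 2000
for _ in range(n_trials):
    s_true = rng.uniform(-3, 3)
    lam = 10.0 * np.exp(-0.5 * (s_true - centers) ** 2)
    counts = rng.poisson(lam)
    p = posterior(counts)
    # central 90% credible interval from the posterior CDF
    cdf = np.cumsum(p)
    lo = grid[np.searchsorted(cdf, 0.05)]
    hi = grid[np.searchsorted(cdf, 0.95)]
    covered += (lo <= s_true <= hi)

print(f"empirical coverage of 90% intervals: {covered / n_trials:.3f}")
```

A decoder whose model is misspecified (e.g. ignoring shared latent variability across neurons) would show coverage well below the nominal 90%, which is the overconfidence the paper diagnoses.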

Cited by 2 publications (2 citation statements)
References 143 publications
“…It is well-established that both modern deep learning models ( Guo et al, 2017 ) and traditional Bayesian decoders ( Wei et al, 2023 ) can make overconfident predictions. In contrast, trustworthy, well-calibrated models should exhibit high uncertainty when predictions are likely to be inaccurate and low uncertainty when they are likely to be accurate.…”
Section: Discussion
confidence: 99%
“…To investigate the statistical calibration in VAEs, we perform a version of simulation-based calibration ( Talts et al, 2018 ; Cook et al, 2006 ), which is related to frequentist coverage tests ( Wei et al, 2023 ). Here, we focus on the calibration of the predictive distribution in data space.…”
Section: Methods
confidence: 99%
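The simulation-based calibration (SBC) procedure this citation refers to can be sketched on a toy model: draw a parameter from the prior, simulate data, draw posterior samples, and record the rank of the true parameter among those samples. If the posterior is correct, the ranks are uniform. The example below uses a conjugate Gaussian model (an assumption for illustration, not the VAE setting of the citing paper) so the posterior is exact.

```python
# Minimal SBC sketch (Talts et al., 2018) on a toy conjugate-Gaussian
# model: theta ~ N(0, 1), y | theta ~ N(theta, sigma^2).  If the
# posterior is correct, the rank of theta among L posterior draws is
# uniform on {0, ..., L}.
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.5        # known observation noise
L = 99             # posterior draws per simulated data set
ranks = []
for _ in range(5000):
    theta = rng.normal(0.0, 1.0)        # draw from the prior
    y = rng.normal(theta, sigma)        # simulate one observation
    # exact conjugate posterior: N(mu_post, sd_post^2)
    prec = 1.0 + 1.0 / sigma**2
    mu_post = (y / sigma**2) / prec
    sd_post = prec ** -0.5
    draws = rng.normal(mu_post, sd_post, size=L)
    ranks.append(int((draws < theta).sum()))   # rank in {0, ..., L}

hist = np.bincount(ranks, minlength=L + 1)
print("rank histogram min/max counts:", hist.min(), hist.max())
```

A U-shaped rank histogram (mass piling up at 0 and L) would indicate an overconfident posterior, which is the failure mode connected to the coverage tests of Wei et al. (2023).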