Medical Imaging 2021: Image Processing (2021)
DOI: 10.1117/12.2580169

Low-count PET image reconstruction with Bayesian inference over a Deep Prior

Cited by 4 publications (3 citation statements, all of type "mentioning"; the citing publications appeared in 2022 and 2023). References 0 publications.
“…Examples include using a prior for the network parameters (and even using posterior distributions to quantify and make use of the uncertainty [70,71]) and use of stochastic gradient Langevin dynamics (SGLD) for fully Bayesian posterior inference via the DIP [72,73]. A further approach has been Stein’s unbiased risk estimator (SURE) as the loss function for training the DIP (DIP-SURE) instead of using an MSE loss, again to avoid the overfitting problem [74]…”
Section: AI Methods Without Training Data (citation type: mentioning)
confidence: 99%
“…Unfortunately, MCMC has a computational cost comparable to Hessian-based approaches, and its use is restricted to geophysical tomography [16,17], which has fewer time constraints. A potentially cheaper technique is the stochastic gradient Langevin dynamics algorithm, which has been applied to positron emission tomography reconstruction [18]. Stochastic gradient Langevin dynamics couples stochastic gradient descent to a normally distributed Monte-Carlo sampler…”
Section: The Review (citation type: mentioning)
confidence: 99%
“…But DIP is vulnerable to overfitting and therefore requires manual intervention in the form of early stopping or model under-parametrization. Consequently, a Bayesian neural network (BNN) approach to DIP has been considered to automate the prevention of overfitting [22,23]. Outperforming previous approaches, Posterior Temperature Optimized Bayesian Inverse Models (POTOBIM) have recently shown the most successful application of Bayesian DIP in the context of sparse-view reconstruction [24]…”
Section: Introduction (citation type: mentioning)
confidence: 99%
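
As a concrete picture of the overfitting problem this excerpt refers to: a vanilla DIP fit simply minimises an MSE data-fit, and without a stopping rule the network eventually reproduces the noise in the measurements. A minimal sketch follows, with a hypothetical `stop_fn` standing in for whatever hand-tuned criterion is used.

```python
import torch

def fit_dip(net, z, y, max_iters=5000, check_every=100, stop_fn=None):
    """Vanilla deep-image-prior fit of net(z) to measurements y.

    `stop_fn` is a hypothetical early-stopping hook; choosing it by hand
    is the manual intervention the quoted passage refers to. Sketch only.
    """
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for it in range(max_iters):
        opt.zero_grad()
        loss = ((net(z) - y) ** 2).mean()   # plain MSE data fit
        loss.backward()
        opt.step()
        if stop_fn is not None and it % check_every == 0 and stop_fn(net, it):
            break                           # stop before the noise is fitted
    return net(z).detach()
```

The Bayesian variants cited above (BNN-based DIP and POTOBIM) aim to remove the need for this hand-chosen stopping point.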