2022
DOI: 10.48550/arxiv.2212.07786
Preprint
Convergent Data-driven Regularizations for CT Reconstruction

Cited by 2 publications (3 citation statements)
References 0 publications
“…Here, potential generalizations also could incorporate alternative loss functions in the integrand of the reconstruction training loss, which might result in approximations of alternative estimators induced by Bayes costs [12]. In this context, as a starting point, one may limit oneself to linear estimators to further investigate the relation to learned MAP estimators, respectively Tikhonov regularization, as in [2,11]. In order to obtain convergence guarantees as well as data dependence, when desirable, one can explore a noise-controlled convex combination of both training losses.…”
Section: Discussion and Outlook
confidence: 99%
“…Our iResNet method using the diagonal architecture can be seen as a generalization of the learned linear spectral regularization considered in [22]. Using a different loss, which measures the reconstruction error, the authors of [22] obtain learned filter functions corresponding to Tikhonov regularization with data-and noise-dependent regularization parameters for each singular vector. An extension of the iResNet approach to this kind of training loss is thus desirable for future comparison.…”
Section: Discussion and Outlook
confidence: 99%
“…An earlier work [12] proposed a data-driven approach to a learned linear spectral regularization; more recently, [7,22] considered convergence aspects. In [22], the authors learn a scalar for each singular function direction in a filter-based reconstruction scheme, i.e. a linear regularization scheme.…”
Section: Related Work
confidence: 99%
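The statements above describe learned linear spectral regularization: a filter-based reconstruction that assigns one scalar per singular direction of the forward operator, with classical Tikhonov filters as a special case. A minimal sketch of that idea, assuming a small dense matrix as a stand-in for a discretized CT forward operator (function names are illustrative, not from the cited papers; a learned scheme would replace the analytic Tikhonov filters with trained scalars):

```python
import numpy as np

def tikhonov_filters(s, alpha):
    """Classical spectral filters f_i = s_i^2 / (s_i^2 + alpha).

    In a learned linear spectral regularization, these scalars would
    instead be obtained by training, possibly noise-dependent per
    singular direction.
    """
    return s**2 / (s**2 + alpha)

def spectral_reconstruction(A, y, filters):
    """Filter-based reconstruction x = sum_i (f_i / s_i) <u_i, y> v_i."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = filters * (U.T @ y) / s  # one scalar filter per singular direction
    return Vt.T @ coeffs

# Toy usage: with alpha -> 0 the filters tend to 1 and the scheme
# reduces to the (pseudo)inverse applied to the data.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
x_true = np.array([1.0, -1.0])
y = A @ x_true
s = np.linalg.svd(A, compute_uv=False)
x_rec = spectral_reconstruction(A, y, tikhonov_filters(s, alpha=0.0))
```

For noisy data, a positive `alpha` (or learned per-direction scalars) damps the small-singular-value components that would otherwise amplify the noise.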