2001
DOI: 10.1117/1.1352753

Use of penalty terms in gradient-based iterative reconstruction schemes for optical tomography

Abstract: It is well known that the reconstruction problem in optical tomography is ill-posed. In other words, many different spatial distributions of optical properties inside the medium can lead to the same detector readings on the surface of the medium under consideration. Therefore, the choice of an appropriate method to overcome this problem is of crucial importance for any successful optical tomographic image reconstruction algorithm. In this work we approach the problem within a gradient-based iterative image rec…
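For context, a penalized objective of the kind the abstract refers to can be sketched as follows. This is a generic illustrative form, not necessarily the exact functional used in the paper; the symbols (the optical-property distribution, measured and predicted detector readings, penalty weight and penalty term) are assumptions introduced here.

```latex
% Generic penalized least-squares objective (illustrative sketch only):
%   \mu               -- spatial distribution of optical properties in the medium
%   M_i, P_i(\mu)     -- measured and model-predicted detector readings
%   \lambda, \Pi(\mu) -- penalty weight and penalty term carrying a priori information
\Phi(\mu) = \sum_{i=1}^{N} \bigl( M_i - P_i(\mu) \bigr)^{2} + \lambda\, \Pi(\mu)
```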

Cited by 53 publications (46 citation statements)
References: 38 publications
“…[8][9][10][11][12] to incorporate a priori information. Introducing penalty functions [11] and uniform [2][3][4][5] or spatially varying [12] regularization terms in the inverse problem formulation is another way of incorporating a priori information. However, in these studies, rather than specific information about the unknown image, generic probability density functions or regularization terms have been used.…”
Section: Related Work
mentioning
confidence: 99%
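As a concrete illustration of the difference between uniform and spatially varying regularization terms mentioned in this statement, here is a minimal NumPy sketch. The linearized forward operator J, the measurement vector y, and the quadratic form of the penalty are assumptions made for the example, not details taken from the cited studies.

```python
import numpy as np

def penalized_objective(x, J, y, lam, weights=None):
    """Least-squares data misfit plus a quadratic penalty (illustrative sketch).

    x       : current estimate of the optical-property image (flattened)
    J, y    : assumed linearized forward operator and measurement vector
    lam     : scalar penalty weight
    weights : optional per-voxel weights (spatially varying regularization)
    """
    misfit = np.sum((J @ x - y) ** 2)
    if weights is None:
        penalty = lam * np.sum(x ** 2)            # uniform (Tikhonov-like) term
    else:
        penalty = lam * np.sum(weights * x ** 2)  # spatially varying term
    return misfit + penalty
```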
“…For example, Tikhonov regularization (11) involves a regularization parameter that is added to the diagonal of the matrix to be inverted to achieve diagonal dominance; this parameter is spatially invariant and must be carefully selected to ensure convergence, or can be updated dynamically by using heuristics, as in the Levenberg-Marquardt method (12). Improved results have been demonstrated by using spatially variant regularization based on a priori assumptions of system noise (13), or by using a penalty function based on a priori assumptions on the final parameter distribution (10). Others have used estimates of measurement error based on shot-noise statistics to weight measurements (14).…”
mentioning
confidence: 99%
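To make the diagonal-damping idea in this statement concrete, below is a minimal sketch of a Levenberg-Marquardt-style update step. The function name and the optional per-parameter weights (standing in for spatially variant regularization) are illustrative assumptions, not the implementation of the cited works.

```python
import numpy as np

def lm_update(J, residual, lam, diag_weights=None):
    """One damped Gauss-Newton / Levenberg-Marquardt-style update (sketch).

    J            : Jacobian of the forward model at the current estimate
    residual     : measured data minus model prediction
    lam          : damping/regularization parameter added to the diagonal
    diag_weights : optional per-parameter weights (spatially variant case)
    """
    JtJ = J.T @ J
    if diag_weights is None:
        damping = lam * np.eye(JtJ.shape[0])   # spatially invariant (Tikhonov-like)
    else:
        damping = lam * np.diag(diag_weights)  # spatially variant
    # Damped normal equations; the added diagonal improves conditioning.
    return np.linalg.solve(JtJ + damping, J.T @ residual)
```

In the spatially invariant case the damping must be chosen carefully (or updated heuristically between iterations, as in Levenberg-Marquardt); the weighted case simply lets that damping vary over the parameter vector.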
“…10). For example, Tikhonov regularization (11) involves a regularization parameter that is added to the diagonal of the matrix to be inverted to achieve diagonal dominance; this parameter is spatially invariant and must be carefully selected to ensure convergence, or can be updated dynamically by using heuristics, as in the Levenberg-Marquardt method (12).…”
mentioning
confidence: 99%
“…Some studies [20][21][22] used gradient-based iterative image reconstruction schemes consisting of the minimization of an appropriately defined objective function separated into both least squares of errors and additional penalty terms containing a priori information. This gradient-based iterative image reconstruction method uses the gradient of the objective function in a line-minimization scheme to provide subsequent guesses of the spatial distribution of the optical properties for the forward model, and the reconstruction of these properties is completed once a minimum of this objective function is found.…”
Section: Introduction
mentioning
confidence: 99%
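A minimal sketch of such a gradient-based scheme is given below, assuming user-supplied objective and gradient callables for the penalized functional and a simple backtracking line search in place of the line-minimization routines used in refs. [20]-[22].

```python
import numpy as np

def reconstruct(x0, objective, gradient, n_iter=50, tol=1e-8):
    """Gradient-based iterative reconstruction with a backtracking line search.

    Generic sketch only: the gradient of the penalized objective provides a
    descent direction, and a line minimization along that direction yields the
    next guess of the spatial distribution of the optical properties.
    """
    x = x0.copy()
    for _ in range(n_iter):
        g = gradient(x)
        if np.linalg.norm(g) < tol:   # (approximate) minimum reached
            break
        step, f0 = 1.0, objective(x)
        # Backtracking line search: shrink the step until the objective decreases.
        while objective(x - step * g) > f0 and step > 1e-12:
            step *= 0.5
        x = x - step * g
    return x
```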