2005
DOI: 10.1088/0031-9155/51/2/004

Convergence study of an accelerated ML-EM algorithm using bigger step size

Abstract: In SPECT/PET, the maximum-likelihood expectation-maximization (ML-EM) algorithm is getting more attention as the speed of computers increases. This is because it can incorporate various physical aspects into the reconstruction process, leading to a more accurate reconstruction than analytical methods such as filtered-backprojection algorithms. However, the convergence rate of the ML-EM algorithm is very slow. Several methods have been developed to speed it up, such as the ordered-subset expectation-maximization (OS-EM) algorithm…
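The acceleration studied here is essentially the standard ML-EM update with its multiplicative correction raised to a power greater than one. As a minimal sketch (an assumed NumPy formulation with a dense system matrix; the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def ml_em_power(A, y, n_iter=50, p=1.0, eps=1e-12):
    """ML-EM with an optional power (bigger step size) factor.

    A : (n_detectors, n_pixels) system matrix
    y : (n_detectors,) measured projection counts
    p : exponent on the multiplicative correction; p = 1 is plain
        ML-EM, p > 1 is the accelerated "bigger step" variant.
    """
    x = np.ones(A.shape[1])                # non-negative uniform start
    sens = np.maximum(A.sum(axis=0), eps)  # sensitivity image
    for _ in range(n_iter):
        proj = np.maximum(A @ x, eps)      # forward projection
        correction = (A.T @ (y / proj)) / sens
        x *= correction ** p               # p > 1 enlarges each step
    return x
```

With p = 1 each iteration increases the likelihood monotonically; raising p trades that guarantee for speed, which is exactly the convergence question the paper studies.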

Cited by 31 publications (44 citation statements)
References 18 publications

“…Although a bigger step size is effective for accelerating the IIR method, the algorithm diverges or oscillates if it is too big [12,13]. The value of ℎ was chosen to characterize the difference between lower- and higher-order discretization methods (i.e., OS-EM and RK, respectively)…”
Section: Numerical Example
confidence: 99%
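The divergence caveat is the classical stability limit of low-order explicit schemes. A toy illustration (hypothetical numbers, unrelated to the cited reconstructions) on the test equation dx/dt = −λx, where explicit Euler gives x_{k+1} = (1 − hλ)x_k and is stable only for h < 2/λ:

```python
# Explicit Euler on dx/dt = -lam * x: stable for h < 2/lam,
# oscillatory near the limit, divergent beyond it.
lam = 1.0
for h in (0.5, 1.9, 2.5):   # safe / oscillating / divergent step sizes
    x = 1.0
    for _ in range(20):
        x += h * (-lam * x)
    print(f"h = {h}: x after 20 steps = {x:.3e}")
```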
“…To accelerate the ML-EM algorithm, which has the drawback of slow convergence, the ordered-subset variation of ML-EM, known as ordered-subsets expectation-maximization (OS-EM) [7,8], is an effective and popular method. In addition to OS-EM, a larger power factor that does not cause divergence in the iterative process was introduced [9][10][11][12][13] to further accelerate convergence. It was asserted [12] that a power-based ML-EM algorithm with increased power (step size) not only accelerated the convergence rate but also maximized the likelihood values.…”
Section: Introduction
confidence: 99%
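As a hedged sketch of the ordered-subsets idea referenced above (the subset partitioning and names are assumptions, not the cited implementation): OS-EM splits the projection data into B subsets and applies one EM-style update per subset, so each pass through the data performs B image updates instead of one:

```python
import numpy as np

def os_em(A, y, n_subsets=4, n_passes=10, eps=1e-12, seed=0):
    """Ordered-subsets EM: one ML-EM-style update per data subset."""
    rng = np.random.default_rng(seed)
    subsets = np.array_split(rng.permutation(A.shape[0]), n_subsets)
    x = np.ones(A.shape[1])
    for _ in range(n_passes):
        for s in subsets:                  # one sub-iteration per subset
            As, ys = A[s], y[s]
            proj = np.maximum(As @ x, eps)
            x *= (As.T @ (ys / proj)) / np.maximum(As.sum(axis=0), eps)
    return x
```

In practice the subsets are chosen as balanced, well-separated projection angles rather than at random; the random split here is only for brevity.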
“…For example, an enlarged step size in the form of an exponential power was proposed for ML-EM reconstruction [50]. The algorithm lacks monotonicity and hence may have stability problems. The idea of an over-relaxed step size was also mentioned in Yu et al. [30] for the ICD framework, where a factor of 1 to 2 was used to scale up the step size.…”
Section: E. Computation of the Solution
confidence: 99%
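For a quadratic objective, over-relaxed coordinate descent with a factor between 1 and 2 is the familiar SOR recipe. A minimal sketch of that idea (assuming a symmetric positive-definite Hessian H; this illustrates the over-relaxation mentioned above, not Yu et al.'s ICD code):

```python
import numpy as np

def icd_over_relaxed(H, b, omega=1.5, n_sweeps=20):
    """Coordinate descent on f(x) = 0.5 x'Hx - b'x with step scaling.

    omega = 1 recovers plain ICD; 1 < omega < 2 scales up each exact
    coordinate-wise minimizing step, as in over-relaxation.
    """
    x = np.zeros(len(b))
    for _ in range(n_sweeps):
        for j in range(len(b)):
            g = H[j] @ x - b[j]            # partial derivative in x_j
            x[j] -= omega * g / H[j, j]    # exact step scaled by omega
    return x
```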
“…(25) varied from 1 to K, a value larger than 1. That is, the optimal scaling factor ρ*, which produces the optimal step size to achieve the fastest convergence rate at each iteration, is not scaled down gradually but needs to be increased successively with iteration, which is fundamentally different from other methods [50], where the exponential factor is supposed to become more conservative with increasing iterations.…”
Section: E. Computation of the Solution
confidence: 99%
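Schematically (notation assumed here for illustration, not taken from either paper), the two strategies place the relaxation in different spots and evolve it in opposite directions:

```latex
% Power-based ML-EM: multiplicative correction raised to a power p_k,
% kept conservative (p_k decreasing toward 1) as iterations proceed.
x^{(k+1)} = x^{(k)} \left( c^{(k)} \right)^{p_k}

% Over-relaxed additive step: the factor rho_k^* grows from 1 toward K.
x^{(k+1)} = x^{(k)} + \rho_k^{*}\, \Delta^{(k)}
```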