2015
DOI: 10.1016/j.neuroimage.2015.03.009
MR-based attenuation correction for PET/MRI neurological studies with continuous-valued attenuation coefficients for bone through a conversion from R2* to CT-Hounsfield units

Cited by 78 publications (109 citation statements)
References 28 publications
“…In our study, the relative error and absolute relative error of ZTE-AC across all 670 VOIs were −0.09% ± 2.26% and 1.77% ± 1.41%, respectively, which are generally comparable to other studies (9,22–24). Previously reported absolute relative percentage errors of PET images range from 1.38% ± 4.52% to 2.55% ± 0.86% (9,22–24), though care should be taken when comparing these studies and the present study, because of analysis variations.…”
Section: Discussion (supporting)
confidence: 90%
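The error figures quoted above are per-VOI summaries. As a point of reference only, the minimal Python sketch below (not taken from any of the cited studies; the function name and values are illustrative) shows one common way to compute the signed relative error and the absolute relative error of MR-based-AC PET uptake against a reference such as CT-based AC.

```python
import numpy as np

def voi_errors(pet_mrac, pet_reference):
    """Per-VOI relative error (%) of MR-based-AC PET against reference-AC PET.

    pet_mrac, pet_reference: 1-D arrays of mean uptake per VOI.
    Returns mean +/- SD of the signed relative error and of its absolute value,
    the two summary statistics typically reported in AC validation studies.
    """
    rel_err = 100.0 * (pet_mrac - pet_reference) / pet_reference
    return {
        "relative_error": (rel_err.mean(), rel_err.std()),
        "absolute_relative_error": (np.abs(rel_err).mean(), np.abs(rel_err).std()),
    }

# Synthetic uptake values for a handful of VOIs, purely for illustration.
reference = np.array([5.2, 4.8, 6.1, 3.9, 5.5])
mrac = np.array([5.1, 4.9, 6.0, 3.9, 5.6])
print(voi_errors(mrac, reference))
```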
“…The first family is that of template-/atlas-/model-based approaches (5–7). The second is that of segmentation approaches (8–12). The third comprises methods that estimate attenuation information directly from the emission data (13,14).…”
mentioning
confidence: 99%
“…Results similar to ours were also presented by Burgos et al (2014) and Izquierdo-Garcia et al (2014) using atlas-based approaches that predict CT HU from T1w images, Johansson et al (2014) with a machine-learning method based on a mixture of Gaussians using two pairs of UTE images, Navalpakkam et al (2013) with a support vector regression method using a Dixon image and a pair of UTE images, and Juttukonda et al (2015) using the R2* signal to model individual bone density. The mean error (ME) or mean absolute error (MAE) reported for the full brain region on PET is, for Burgos: 0.2%/2.9% (ME/MAE, 41 subjects), Izquierdo-Garcia: −1.2% (ME, 15 subjects), Johansson: 1.9% (ME, 8 subjects) (Larsson et al 2013), Navalpakkam: 2.4% (MAE, 5 subjects), and Juttukonda: 2.6% (MAE, 98 subjects).…”
Section: Discussion (supporting)
confidence: 83%
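For orientation, the sketch below illustrates the general idea of a continuous R2*-to-CT-number conversion for bone voxels, the approach attributed to Juttukonda et al. in the quote above. The sigmoid shape and every coefficient here are placeholder assumptions chosen only to show a continuous, monotonic mapping; they are not the regression actually fitted from paired MR/CT data in that paper.

```python
import numpy as np

def r2star_to_pseudo_hu(r2star, hu_max=2000.0, r2star_mid=300.0, slope=0.01):
    """Continuous R2* (s^-1) -> CT-number (HU) mapping for bone voxels.

    Placeholder sigmoid: every parameter is an assumption for illustration,
    not the conversion published in Juttukonda et al. (2015).
    """
    r2star = np.asarray(r2star, dtype=float)
    return hu_max / (1.0 + np.exp(-slope * (r2star - r2star_mid)))

# Voxels previously classified as bone receive a continuous HU value,
# which would then pass through a standard CT-to-mu conversion at 511 keV.
bone_r2star = np.array([120.0, 250.0, 400.0, 800.0])  # s^-1, illustrative
print(r2star_to_pseudo_hu(bone_r2star))
```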
“…We use a low threshold for included bone (R2* > 100 s⁻¹), which is lower than that in the literature (Keereman et al 2010, Juttukonda et al 2015). We chose to use the lower threshold to capture the full width of the bone and avoid discontinuities in the skull, as opposed to existing UTE-based segmentation methods (Dickson et al 2014, Juttukonda et al 2015). The low threshold might introduce a bias, as the amount of included bone can be locally overestimated, especially in areas of complex air, bone, and tissue combinations.…”
Section: Introduction (mentioning)
confidence: 99%
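To make the thresholding step concrete, here is a minimal sketch of a binary bone mask obtained by thresholding an R2* map. The 100 s⁻¹ cut-off mirrors the quoted passage; the profile values and function name are synthetic and illustrative only.

```python
import numpy as np

def bone_mask_from_r2star(r2star_map, threshold=100.0):
    """Binary bone mask from an R2* map (s^-1) by simple thresholding.

    A low cut-off (e.g. 100 s^-1, as discussed in the quoted passage)
    yields a wider, more continuous skull mask; a higher cut-off keeps
    only denser bone but risks gaps in the segmentation.
    """
    return np.asarray(r2star_map) > threshold

# Illustrative 1-D profile through scalp / skull / brain tissue (s^-1).
profile = np.array([30.0, 80.0, 150.0, 420.0, 260.0, 95.0, 40.0])
print(bone_mask_from_r2star(profile))          # low threshold: wider bone
print(bone_mask_from_r2star(profile, 300.0))   # higher threshold: narrower
```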
“…The creation of such parametric images can be divided into … (figure legend: … [208], (E) Anazodo et al [209], (F) Izquierdo-Garcia et al [210], (G) Burgos et al [211], (H) Merida et al [212]; MLAA-based methods: (I) Benoit et al [213]; segmentation-based methods: (J) Cabello et al [214], (K) Juttukonda et al [215], (L) Ladefoged et al [216]). Graphical analysis methods include the Gjedde-Patlak equation [251,252], which describes irreversibly bound tracers such as [18F]FDG, and the Logan plot, which can be used to describe reversibly bound tracers [253].…”
Section: Kinetic Modeling and Image-Derived Input Function (IDIF) (mentioning)
confidence: 99%
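Since the quote names the Gjedde-Patlak equation without stating it, the sketch below implements the standard Patlak graphical analysis for an irreversibly bound tracer such as [18F]FDG. The time grid, plasma input, and parameter values are synthetic, and the image-derived input function itself is not modelled here.

```python
import numpy as np

def patlak_ki(t, c_tissue, c_plasma, t_star=20.0):
    """Gjedde-Patlak graphical analysis for an irreversibly bound tracer.

    Regresses C_T(t)/C_p(t) on the 'normalized time'
    integral(C_p, 0..t)/C_p(t) for t >= t_star (minutes); the slope
    estimates the net influx rate Ki, the intercept the initial
    distribution volume.
    """
    t, c_tissue, c_plasma = map(np.asarray, (t, c_tissue, c_plasma))
    cum_cp = np.array([np.trapz(c_plasma[: i + 1], t[: i + 1]) for i in range(len(t))])
    x = cum_cp / c_plasma
    y = c_tissue / c_plasma
    late = t >= t_star
    slope, intercept = np.polyfit(x[late], y[late], 1)
    return slope, intercept

# Synthetic data obeying the Patlak model exactly: Ki = 0.03 /min, V0 = 0.5.
t = np.linspace(1.0, 60.0, 60)            # minutes
cp = 100.0 * np.exp(-0.05 * t)            # plasma input (arbitrary units)
cum_cp = np.array([np.trapz(cp[: i + 1], t[: i + 1]) for i in range(len(t))])
ct = 0.03 * cum_cp + 0.5 * cp
print(patlak_ki(t, ct, cp))               # recovers roughly (0.03, 0.5)
```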