Purpose: The clinical utility of FDG-PET in diagnosing frontotemporal dementia (FTD) has been well demonstrated over the past decades. In contrast, the diagnostic value of arterial spin labelling (ASL) MRI, a relatively new technique, in the clinical diagnosis of FTD has yet to be confirmed. Using simultaneous PET/MRI, we evaluated the diagnostic performance of ASL in identifying pathological abnormalities in FTD to determine whether ASL can provide diagnostic value similar to that of FDG-PET.

Methods: ASL and FDG-PET images were compared in 10 patients with FTD and 10 healthy older adults. Qualitative and quantitative measures of diagnostic equivalency were used to determine the diagnostic utility of ASL relative to FDG-PET. Sensitivity, specificity, and inter-rater reliability were calculated for each modality from subjective visual ratings and from analysis of regional mean values in thirteen a priori regions of interest (ROIs). To determine the extent of concordance between modalities in each patient, individual statistical maps generated by comparing each patient to controls were compared between modalities using the Jaccard similarity index (JI).

Results: Visual assessments revealed lower sensitivity, specificity, and inter-rater reliability for ASL (66.67%, 62.12%, and 0.2) than for FDG-PET (88.43%, 90.91%, and 0.61). Across all regions, ASL discriminated patients from controls less well than FDG-PET (area under the receiver operating characteristic curve: ASL = 0.75, FDG-PET = 0.87). In all patients, ASL identified patterns of reduced perfusion consistent with FTD, but areas of hypometabolism exceeded hypoperfused areas (group-mean JI = 0.30 ± 0.22).

Conclusion: This pilot study demonstrated that ASL can detect spatial patterns of abnormalities in individual FTD patients similar to those detected by FDG-PET, but its sensitivity and specificity for discriminating a patient from healthy individuals remained inferior to those of FDG-PET.
Further studies at the individual level are required to confirm the clinical role of ASL in FTD management.
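The concordance measure used above, the Jaccard similarity index over per-patient statistical maps, can be sketched as follows. This is a minimal illustration on binary abnormality masks; the `jaccard_index` helper and the toy arrays are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np

def jaccard_index(map_a, map_b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two binary abnormality maps."""
    a = np.asarray(map_a, dtype=bool)
    b = np.asarray(map_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both maps empty: treat as perfect agreement
    return np.logical_and(a, b).sum() / union

# Toy example: hypometabolism mask (FDG-PET) vs. hypoperfusion mask (ASL).
pet = np.array([[1, 1, 0],
                [1, 0, 0]])
asl = np.array([[1, 0, 0],
                [1, 0, 0]])
ji = jaccard_index(pet, asl)
print(ji)  # 2 overlapping voxels / 3 voxels in the union ≈ 0.667
```

As in the study, a JI near 1 would indicate that the hypoperfused and hypometabolic regions coincide, while the observed group-mean JI of 0.30 reflects PET abnormalities extending well beyond the ASL ones.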
Cardiac left ventricle (LV) quantification is among the most clinically important tasks for identification and diagnosis of cardiac diseases, yet it remains challenging due to the high variability of cardiac structure and the complexity of temporal dynamics. Full quantification, i.e., simultaneously quantifying all LV indices including two areas (cavity and myocardium), six regional wall thicknesses (RWT), three LV dimensions, and one cardiac phase, is even more challenging, since the uncertain relatedness within and between the types of indices may hinder the learning procedure from achieving good convergence and generalization. In this paper, we propose a newly designed multitask learning network (FullLVNet), which consists of a deep convolutional neural network (CNN) for expressive feature embedding of cardiac structure, two parallel recurrent neural network (RNN) modules for modeling temporal dynamics, and four linear models for the final estimation. During the final estimation, both intra- and inter-task relatedness are modeled to improve generalization: 1) for intra-task relatedness, group lasso is applied to each regression task for sparse and common feature selection and consistent prediction; 2) for inter-task relatedness, three phase-guided constraints are proposed to penalize violations of the temporal behavior of the obtained LV indices. Experiments on MR sequences of 145 subjects show that FullLVNet, with intra- and inter-task relatedness modeled, achieves highly accurate predictions: mean absolute errors of 190 mm², 1.41 mm, and 2.68 mm for average areas, RWT, and dimensions, respectively, and an error rate of 10.4% for phase classification. This gives our method great potential in comprehensive clinical assessment of global, regional, and dynamic cardiac function.
Accurate estimation of regional wall thicknesses (RWT) of the left ventricular (LV) myocardium from cardiac MR sequences is of significant importance for identification and diagnosis of cardiac disease. Existing RWT estimation still relies on segmentation of the LV myocardium, which requires strong prior information and user interaction. No work has been devoted to direct estimation of RWT from cardiac MR images, due to the diverse shapes and structures across subjects and cardiac diseases, as well as the complex regional deformation of the LV myocardium during the systole and diastole phases of the cardiac cycle. In this paper, we present a newly proposed Residual Recurrent Neural Network (ResRNN) that fully leverages the spatial and temporal dynamics of the LV myocardium to achieve accurate frame-wise RWT estimation. Our ResRNN comprises two paths: 1) a feed-forward convolutional neural network (CNN) for effective and robust embedding of various cardiac images and preliminary estimation of RWT from each frame independently, and 2) a recurrent neural network (RNN) that further improves the estimation by modeling the spatial and temporal dynamics of the LV myocardium. For the RNN path, we design a Circle-RNN for cardiac sequences to eliminate the effect of the null hidden input at the first time-step. Our ResRNN obtains accurate estimates of cardiac RWT with a mean absolute error of 1.44 mm (less than 1-pixel error) when validated on cardiac MR sequences of 145 subjects, evidencing its great potential in clinical cardiac function assessment.