Brain parcellation tools based on multiple-atlas algorithms have recently emerged as a promising method for accurately defining brain structures. When dealing with data from various sources, it is crucial that these tools are robust across different imaging protocols. In this study, we tested the robustness of a multiple-atlas, likelihood fusion algorithm using Alzheimer’s Disease Neuroimaging Initiative (ADNI) data acquired with six different protocols, comprising three manufacturers and two magnetic field strengths. The entire brain was parcellated at five different levels of granularity. At each level, which defines a set of brain structures ranging from eight to 286 regions, we evaluated the variability of brain volumes related to protocol, age, and diagnosis (healthy or Alzheimer’s disease). Our results indicated that, with proper pre-processing steps, the impact of different protocols is minor compared to biological effects such as age and pathology. A precise knowledge of the sources of data variation enables sufficient statistical power and ensures the reliability of an anatomical analysis when this automated brain parcellation tool is applied to datasets from various imaging protocols, such as clinical databases.
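The variability analysis described above can be illustrated with a simple variance decomposition. The sketch below is not the study's actual pipeline; it assumes a hypothetical table volumes.csv with one parcellated region volume per scan plus protocol, age, and diagnosis columns, and fits an ordinary least squares model followed by a type-II ANOVA to compare the variance attributable to protocol against that attributable to the biological factors.

```python
# Minimal sketch (not the study's pipeline): how much of the variance in one
# parcellated region's volume is attributable to imaging protocol versus
# biological factors (age, diagnosis)?
# Assumes a hypothetical file "volumes.csv" with columns:
#   hippocampus_volume, protocol, age, diagnosis
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("volumes.csv")

# Ordinary least squares with protocol and diagnosis as categorical factors.
model = smf.ols("hippocampus_volume ~ C(protocol) + age + C(diagnosis)",
                data=df).fit()

# Type-II ANOVA table: sums of squares explained by protocol vs. age/diagnosis.
print(sm.stats.anova_lm(model, typ=2))
```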
Purpose: To develop a deep learning-based reconstruction framework for ultrafast and robust diffusion tensor imaging (DTI) and fiber tractography. Methods: SuperDTI was developed to learn the nonlinear relationship between diffusion-weighted images (DWIs) and the corresponding diffusion tensor parameter maps. It bypasses the tensor fitting procedure, which is highly susceptible to noise and motion in the DWIs. The network was trained and tested using data sets from the Human Connectome Project and from patients with ischemic stroke. Results from SuperDTI were compared against widely used methods for tensor parameter estimation and fiber tracking. Results: Using training and testing data acquired with the same protocol and scanner, SuperDTI was shown to generate fractional anisotropy and mean diffusivity maps, as well as fiber tractography, from as few as six raw DWIs, with a quantification error of less than 5% in all white-matter and gray-matter regions of interest. It was robust to noise and motion in the testing data. Furthermore, the network trained using healthy volunteer data showed no apparent reduction in lesion detectability when directly applied to stroke patient data. Conclusions: Our results demonstrate the feasibility of superfast DTI and fiber tractography using deep learning directly from as few as six DWIs, bypassing tensor fitting. Such a significant reduction in scan time may allow the inclusion of DTI in the clinical routine for many potential applications.
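Conceptually, bypassing tensor fitting means training an image-to-image network that maps the six raw DWI channels directly to parameter maps. The sketch below is a minimal stand-in, not the published SuperDTI architecture: a small convolutional network (the class name DWIToTensorMaps is hypothetical) that takes six DWI channels and outputs FA and MD maps.

```python
# Minimal sketch of the image-to-image idea; the actual SuperDTI architecture
# is not reproduced here. Six raw DWI channels in, two parameter maps (FA, MD)
# out, with no voxel-wise tensor fitting in between.
import torch
import torch.nn as nn

class DWIToTensorMaps(nn.Module):          # hypothetical illustration class
    def __init__(self, n_dwi: int = 6, n_maps: int = 2, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_dwi, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, n_maps, kernel_size=3, padding=1),
        )

    def forward(self, x):        # x: (batch, 6, H, W) raw DWI slices
        return self.net(x)       # output: (batch, 2, H, W) -> FA, MD

# Such a network would be trained with an image-to-image regression loss
# against reference maps obtained by conventional tensor fitting.
model = DWIToTensorMaps()
dwi_batch = torch.randn(1, 6, 128, 128)    # synthetic stand-in data
fa_md = model(dwi_batch)
print(fa_md.shape)                         # torch.Size([1, 2, 128, 128])
```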
We explored the performance of structure-based computational analysis in four neurodegenerative conditions [Ataxia (AT, n = 16), Huntington's Disease (HD, n = 52), Alzheimer's Disease (AD, n = 66), and Primary Progressive Aphasia (PPA, n = 50)], all characterized by brain atrophy. The independent variables were the volumes of 283 anatomical areas, derived from automated segmentation of high-resolution T1-weighted brain MRIs. The segmentation-based volumetric quantification reduces image dimensionality from the voxel level [on the order of 10^6] to anatomical structures [on the order of 10^2] for subsequent statistical analysis. We evaluated the effectiveness of this approach at extracting anatomical features, already described by human experience and a priori biological knowledge, in specific scenarios: (1) when pathologies were relatively homogeneous, with evident image alterations (e.g., AT); (2) when the time course was highly correlated with the anatomical changes (e.g., HD), an analogy for prediction; (3) when the pathology embraced heterogeneous phenotypes (e.g., AD), so the classification was less efficient but, in compensation, anatomical and clinical information were less redundant; and (4) when the entity was composed of multiple subgroups that had some degree of anatomical representation (e.g., PPA), showing the potential of this method for clustering more homogeneous phenotypes that can be of clinical importance. Using the structure-based quantification and simple linear classifiers (partial least squares), we achieved accuracies of 87.5% and 73% in differentiating AT and pre-symptomatic HD patients, respectively, from controls. More importantly, the anatomical features automatically revealed by the classifiers agreed with the patterns previously described for these pathologies. The accuracy was lower (68%) in differentiating AD from controls, as AD does not display a clear anatomical phenotype. On the other hand, the method identified PPA clinical phenotypes and their respective anatomical signatures. Although most of the data are presented here as proof of concept in simulated clinical scenarios, structure-based analysis was potentially effective in characterizing phenotypes, retrieving relevant anatomical features, predicting prognosis, and aiding diagnosis, with the advantage of being easily translatable to clinics and biologically understandable.
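The classification step described above can be sketched as a partial least squares regression on the regional volumes with a thresholded output. The code below uses synthetic data and scikit-learn's PLSRegression as a stand-in; it is not the study's pipeline, only an illustration of how a simple linear classifier operates on the ~283 volumetric features.

```python
# Minimal sketch, not the published pipeline: a partial least squares (PLS)
# classifier on ~283 regional volumes per subject. Data and labels here are
# synthetic; in the study, volumes came from automated T1 segmentation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_subjects, n_regions = 80, 283
X = rng.normal(size=(n_subjects, n_regions))   # regional volumes (stand-in)
y = rng.integers(0, 2, size=n_subjects)        # 0 = control, 1 = patient

# PLS regression on a binary target; thresholding the continuous prediction
# at 0.5 turns it into a simple linear classifier.
pls = PLSRegression(n_components=5)
y_pred = cross_val_predict(pls, X, y.astype(float), cv=5).ravel() > 0.5
accuracy = np.mean(y_pred == y.astype(bool))
print(f"cross-validated accuracy: {accuracy:.2f}")
```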
With the rapid growth of medical big data, medical signal processing and measurement techniques are facing severe challenges. Enormous numbers of medical images are constantly generated by various health monitoring and sensing devices, such as ultrasound and MRI machines. Hence, based on the pulse coupled neural network (PCNN) and the classical visual receptive field (CVRF) with the difference of two Gaussians (DOG), a contrast enhancement method for MRI images is proposed to improve the accuracy of clinical diagnosis for smarter mobile healthcare. First, the parameters of the DOG are estimated from the fundamentals of the CVRF; the PCNN parameters used in image enhancement are then estimated with the help of the DOG. As a result, MRI images can be enhanced adaptively. Owing to the exponential decay of the dynamic threshold and the pulse coupling among neurons, the PCNN effectively enhances the contrast of low grey levels in MRI images. Moreover, because of the inhibitory effects of the inhibitory region in the CVRF, the PCNN also effectively preserves structures such as edges in the enhanced results. Experiments on several MRI images show that the proposed method performs better than other methods by improving contrast while preserving structures well.
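The two mechanisms named above, a DOG receptive-field filter and a PCNN with an exponentially decaying dynamic threshold, can be sketched as follows. The parameters are illustrative placeholders, not the ones estimated in the paper; the point is only to show how pulse coupling and threshold decay produce an adaptively enhanced map.

```python
# Minimal sketch with illustrative parameters: a difference-of-two-Gaussians
# (DOG) center-surround filter plus a simplified PCNN whose dynamic threshold
# decays exponentially. This is not the paper's parameter-estimation scheme,
# only the basic mechanism it builds on.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(image, sigma_center=1.0, sigma_surround=3.0):
    # Excitatory center minus inhibitory surround (classical visual receptive field).
    return gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround)

def pcnn_firing_times(stimulus, n_iter=30, alpha=0.2, v_theta=20.0, beta=0.1):
    F = stimulus.astype(float)              # feeding input = (DOG-sharpened) image
    Y = np.zeros_like(F)                    # binary pulse output
    theta = np.full_like(F, F.max())        # dynamic threshold
    fire_time = np.zeros_like(F)
    for n in range(1, n_iter + 1):
        L = gaussian_filter(Y, 1.0)                   # linking: coupling among neighbors
        U = F * (1.0 + beta * L)                      # internal activity
        Y = (U > theta).astype(float)                 # pulse when activity exceeds threshold
        theta = np.exp(-alpha) * theta + v_theta * Y  # exponential decay + refractory boost
        fire_time[(fire_time == 0) & (Y > 0)] = n     # record first firing iteration
    return fire_time  # brighter inputs fire earlier; the firing-time map serves as the enhanced image here

image = np.random.rand(64, 64)                        # synthetic stand-in for an MRI slice
stimulus = np.clip(image + dog(image), 0.0, None)     # center-surround sharpened input
enhanced = pcnn_firing_times(stimulus)
```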