In this paper, we present a two-stage ensemble-based approach to localize anatomical structures of interest in magnetic resonance imaging (MRI) scans. We combine a Hough voting method with a convolutional neural network to automatically localize brain anatomical structures such as the hippocampus. The hippocampus is one of the regions affected by Alzheimer's disease and is known to be related to memory loss. Structural changes of the hippocampus are important biomarkers for dementia, and accurate localization plays a vital role in analyzing these changes. Furthermore, exact localization is desired for segmentation and registration of anatomical structures. Our proposed models use a deep convolutional neural network (CNN) to predict displacement vectors from multiple three-viewpoint patch samples, following a Hough voting strategy. The displacement vectors are added to the sample positions to estimate the target position. To learn efficiently from the samples, we employed both global and local strategies. Multiple global models were trained using three-viewpoint patches selected randomly from the whole MRI scan, and their results were aggregated to obtain a global prediction. Similarly, we trained multiple local models on patches extracted from the vicinity of the hippocampus location and aggregated their outputs to obtain a local prediction. The proposed models use the Alzheimer's Disease Neuroimaging Initiative (ADNI) MRI dataset and the Gwangju Alzheimer's and Related Dementia (GARD) cohort MRI dataset for training, validation, and testing. The average prediction errors of the proposed two-stage ensemble Hough convolutional neural network (Hough-CNN) models are 2.32 and 2.25 mm for the left and right hippocampi, respectively, on 65 test MRIs from the GARD cohort dataset. Similarly, for the ADNI MRI dataset, the average prediction errors for the left and right hippocampi are 2.31 and 2.04 mm, respectively, on 56 MRI scans.
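The core of the Hough voting step can be summarized as follows: each sampled patch casts a vote for the target position (the patch center plus the CNN-predicted displacement), and the votes from the ensemble are aggregated. The sketch below illustrates this aggregation on synthetic data; the median aggregation, the coordinate ranges, and the noise model are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def hough_vote_localization(patch_centers, predicted_displacements):
    """Aggregate per-patch votes into a single landmark estimate.

    patch_centers           : (N, 3) voxel coordinates of sampled patch centers
    predicted_displacements : (N, 3) offsets predicted by the CNN from each
                              center to the target (e.g., hippocampus centroid)
    Each patch casts one vote (center + displacement); the median of the
    votes is a robust aggregate of the ensemble's predictions.
    """
    votes = patch_centers + predicted_displacements
    return np.median(votes, axis=0)

# Synthetic illustration: true target at (90, 110, 85) voxels, noisy votes.
rng = np.random.default_rng(0)
true_target = np.array([90.0, 110.0, 85.0])
centers = rng.uniform(0, 180, size=(200, 3))                    # random patch positions
offsets = true_target - centers + rng.normal(0, 2.0, (200, 3))  # CNN-like noisy offsets
print(hough_vote_localization(centers, offsets))                # close to true_target
```

In the two-stage setting described above, the same aggregation would be applied twice: once over patches sampled from the whole scan (global estimate), and once over patches sampled near that estimate (local refinement).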
There is ongoing research on the automatic diagnosis of Alzheimer's disease (AD) based on traditional machine learning techniques, and deep learning-based approaches are becoming a popular choice for AD diagnosis. State-of-the-art techniques that consider multimodal diagnosis have been shown to achieve better accuracy than manual diagnosis. However, collecting data from different modalities is time-consuming and expensive, and some modalities involve exposure to radioactive tracers. Our study is confined to structural magnetic resonance imaging (sMRI). The objectives of our work are as follows: 1) to achieve accuracy comparable to state-of-the-art methods; 2) to overcome the overfitting problem; and 3) to analyze proven landmarks of the brain that provide discernible features for AD diagnosis. Here, we focus specifically on the left and right hippocampus regions. To achieve these objectives, we first employ ensembles of simple convolutional neural networks (CNNs) as feature extractors with softmax cross-entropy as the classifier. Then, considering the scarcity of data, we deploy a patch-based approach. We performed our experiments on the Gwangju Alzheimer's and Related Dementia (GARD) cohort dataset prepared by the National Research Center for Dementia, Gwangju, South Korea. We manually localized the left and right hippocampi and, after the preprocessing steps, fed three-view patches (TVPs) to the CNN. We achieved 90.05% accuracy. We compared our model with state-of-the-art methods on the same dataset and found our results comparable. INDEX TERMS: Alzheimer disease classification, Alzheimer disease detection, Alzheimer disease diagnosis, convolutional neural network, deep learning, machine learning, medical imaging.
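A minimal sketch of the patch-based ensemble classifier is given below, assuming the three orthogonal views of a hippocampus patch are stacked as input channels and that ensemble members are averaged at the softmax level. The layer sizes, the 32x32 patch size, the three-member ensemble, and the class coding are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TVPNet(nn.Module):
    """Small CNN over a three-view patch (TVP).

    Assumes the axial, coronal, and sagittal patches around the hippocampus
    are stacked as three input channels (an assumption for this sketch).
    """
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Ensemble prediction: average the softmax outputs of independently trained
# members (a hypothetical three-member ensemble is shown).
models = [TVPNet() for _ in range(3)]
patches = torch.randn(8, 3, 32, 32)          # batch of 32x32 TVPs (illustrative size)
probs = torch.stack([m(patches).softmax(dim=1) for m in models]).mean(dim=0)
labels = probs.argmax(dim=1)                 # 0 = control, 1 = AD (assumed coding)

# Each member would be trained with softmax cross-entropy:
criterion = nn.CrossEntropyLoss()
```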
With increasing markets for fingerprint authentication, there are also increasing concerns about spoofed or synthetically produced fingerprints that can bypass the authentication process. In this Letter, the authors introduce a new convolutional neural network (CNN) architecture for the fingerprint liveness detection problem that provides a more robust framework for network training and detection than previous methods. The proposed method employs a squared regression error for each receptive field without using a fully connected layer. This structure provides the following advantages over previous fingerprint liveness CNNs. First, unlike previous techniques that rely on pre-trained features, the proposed CNN can be trained directly from fingerprints, as the loss is minimised for each receptive field. Second, in contrast to a cross-entropy layer, the squared error layer allows the authors to set a threshold value that controls the acceptable level of false positives or false negatives. Third, the absence of a fully connected layer allows the authors to crop the input fingerprints so that a trade-off between accuracy and computation time can be made without the negative effects of re-scaling. The proposed CNN is shown to provide higher accuracy on three out of four datasets when evaluated against the state-of-the-art method.
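The sketch below illustrates the idea of a fully convolutional liveness scorer with a per-receptive-field squared error and a tunable decision threshold. The layer configuration, input size, score aggregation, and threshold value are assumptions for illustration, not the Letter's exact network.

```python
import torch
import torch.nn as nn

class PatchScoreCNN(nn.Module):
    """Fully convolutional liveness scorer (no fully connected layer).

    Each spatial position of the output map corresponds to one receptive
    field of the input fingerprint, and the squared error is applied at
    every position. Layer sizes here are illustrative only.
    """
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 1, 3),            # 1-channel score map
        )

    def forward(self, x):
        return self.net(x)

model = PatchScoreCNN()
fingerprints = torch.randn(4, 1, 128, 128)       # cropped inputs (size is flexible)
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])      # 1 = live, 0 = spoof (assumed coding)

scores = model(fingerprints)                     # (B, 1, H', W') per-receptive-field scores
targets = labels.view(-1, 1, 1, 1).expand_as(scores)
loss = nn.functional.mse_loss(scores, targets)   # squared regression error per field

# At test time, a tunable threshold trades false accepts against false rejects.
threshold = 0.5                                  # assumed operating point
is_live = scores.mean(dim=(1, 2, 3)) > threshold
```

Because there is no fully connected layer, the same network accepts crops of different sizes, which is what enables the accuracy/computation trade-off mentioned above.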
Minimally invasive transcatheter aortic valve implantation (TAVI) is the most prevalent method for treating aortic valve stenosis. For pre-operative surgical planning, contrast-enhanced coronary CT angiography (CCTA) is used as the imaging technique to acquire 3-D measurements of the valve. Accurate localization of the eight aortic valve landmarks in CT images plays a vital role in the TAVI workflow, because a small error risks blocking the coronary circulation. To examine the valve and mark the landmarks, physicians prefer a view parallel to the hinge plane instead of the conventional axial, coronal, or sagittal views. However, customizing the view is a difficult and time-consuming task because of the unclear pose of the aorta and the various artifacts of CCTA. Therefore, automatic localization of the landmarks can serve as a useful guide for physicians customizing the viewpoint. In this paper, we present an automatic method to localize the aortic valve landmarks using colonial walk, a regression tree-based machine-learning algorithm. For efficient learning from the training set, we propose a two-phase optimized search-space learning model in which a representative point inside the valvular area is first learned from the whole CT volume; all eight landmarks are then learned from a smaller area around that point. Experiments with preprocedural CCTA images of patients undergoing TAVI showed that our method is robust under high stenotic variation and notably efficient, requiring only 12 milliseconds to localize all eight landmarks on a 3.60 GHz single-core CPU.
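The two-phase coarse-to-fine strategy can be sketched as follows. In this sketch a generic random forest regressor stands in for the colonial walk algorithm, and the feature vectors are synthetic placeholders; only the search-space narrowing logic (whole volume, then a representative point, then local landmark regression) is illustrated.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_train = 100

# Phase 1: from whole-volume features, regress one representative point
# inside the valvular area (features and coordinates are placeholders).
global_feats = rng.normal(size=(n_train, 64))
valve_centers = rng.uniform(50, 200, size=(n_train, 3))
phase1 = RandomForestRegressor(n_estimators=50).fit(global_feats, valve_centers)

# Phase 2: from features sampled around the predicted center, regress all
# eight aortic valve landmarks (8 x 3 coordinates, flattened).
local_feats = rng.normal(size=(n_train, 64))
landmarks = rng.uniform(50, 200, size=(n_train, 8 * 3))
phase2 = RandomForestRegressor(n_estimators=50).fit(local_feats, landmarks)

# Inference: phase 1 narrows the search space, phase 2 refines inside it.
test_global = rng.normal(size=(1, 64))
center = phase1.predict(test_global)              # representative valve point
test_local = rng.normal(size=(1, 64))             # features extracted around `center`
eight_landmarks = phase2.predict(test_local).reshape(8, 3)
```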