Accurate and early diagnosis of Alzheimer's disease (AD) plays an important role in patient care and the development of future treatments. Structural and functional neuroimaging, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), provides powerful modalities for understanding the anatomical and functional neural changes related to AD. In recent years, machine learning methods have been widely studied for the analysis of multi-modality neuroimages for quantitative evaluation and computer-aided diagnosis (CAD) of AD. Most existing methods extract hand-crafted imaging features after image preprocessing such as registration and segmentation, and then train a classifier to distinguish AD subjects from other groups. This paper proposes cascaded convolutional neural networks (CNNs) that learn the multi-level and multimodal features of MRI and PET brain images for AD classification. First, multiple deep 3D-CNNs are constructed on different local image patches to transform the local brain image into more compact high-level features. Then, an upper high-level 2D-CNN followed by a softmax layer is cascaded to ensemble the high-level features learned from the two modalities and to generate the latent multimodal correlation features of the corresponding image patches for the classification task. Finally, these learned features are combined by a fully connected layer followed by a softmax layer for AD classification. The proposed method automatically learns generic multi-level and multimodal features from the imaging modalities for classification, and these features are robust to scale and rotation variations to some extent. No image segmentation or rigid registration is required when preprocessing the brain images. Our method is evaluated on the baseline MRI and PET images of 397 subjects, including 93 AD patients, 204 subjects with mild cognitive impairment (MCI; 76 pMCI + 128 sMCI), and 100 normal controls (NC), from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Experimental results show that the proposed method achieves an accuracy of 93.26% for classification of AD vs. NC and 82.95% for classification of pMCI vs. NC, demonstrating promising classification performance.
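The abstract describes the cascade only at a high level; the following is a minimal PyTorch sketch of that idea. The patch size, channel counts, number of patches, and layer depths are illustrative assumptions, not the paper's actual settings, and the class names (Patch3DCNN, CascadedMultimodalCNN) are hypothetical.

```python
# Minimal sketch of the cascaded multimodal CNN idea described in the abstract.
# All sizes below (24^3 patches, 27 patches, 32-dim features) are assumptions.
import torch
import torch.nn as nn

class Patch3DCNN(nn.Module):
    """Maps one local 3D patch (MRI or PET) to a compact high-level feature vector."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.fc = nn.Linear(16 * 6 * 6 * 6, feat_dim)      # assumes 24^3 input patches

    def forward(self, x):                                   # x: (B, 1, 24, 24, 24)
        return self.fc(self.conv(x).flatten(1))             # (B, feat_dim)

class CascadedMultimodalCNN(nn.Module):
    """Cascades patch-level 3D-CNNs with an upper 2D-CNN that fuses the MRI and PET
    features of each patch, then classifies with a fully connected layer."""
    def __init__(self, n_patches=27, feat_dim=32, n_classes=2):
        super().__init__()
        self.mri_cnns = nn.ModuleList([Patch3DCNN(feat_dim) for _ in range(n_patches)])
        self.pet_cnns = nn.ModuleList([Patch3DCNN(feat_dim) for _ in range(n_patches)])
        # 2D-CNN over the (modality x feature) map of each patch pair
        self.fusion = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(2, 3), padding=(0, 1)), nn.ReLU(),
            nn.Flatten(), nn.Linear(8 * feat_dim, 16), nn.ReLU(),
        )
        self.classifier = nn.Linear(n_patches * 16, n_classes)

    def forward(self, mri_patches, pet_patches):            # lists of (B, 1, 24, 24, 24)
        fused = []
        for m_net, p_net, m, p in zip(self.mri_cnns, self.pet_cnns,
                                      mri_patches, pet_patches):
            pair = torch.stack([m_net(m), p_net(p)], dim=1).unsqueeze(1)  # (B, 1, 2, feat)
            fused.append(self.fusion(pair))                               # (B, 16)
        logits = self.classifier(torch.cat(fused, dim=1))
        return logits            # softmax is applied by nn.CrossEntropyLoss in training
```

In this sketch each patch location has its own pair of 3D-CNNs, the small 2D convolution mixes the two modality feature vectors of a patch, and the final fully connected layer combines all patch-level fused features, mirroring the three stages the abstract outlines.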
The existing endoscope causes considerable discomfort to patients because its slim, rigid rod is difficult to pass through the alpha and gamma loops of the human intestine. A robotic endoscope, as a novel solution, is expected to replace the current endoscope in clinical practice. In this paper, a microrobotic endoscope based on a wireless power supply was developed. The robot is mainly composed of a locomotion mechanism, a wireless power supply subsystem, and a communication subsystem. The locomotion mechanism consists of three linear driving cells connected to each other through a two-degree-of-freedom universal joint. The wireless power supply subsystem comprises a resonant transmitting coil that generates an alternating magnetic field and a secondary coil that receives the power. The wireless communication subsystem transmits images to the monitor and sends control commands to the robot. The whole robot is packaged in a waterproof bellows. By actuating the three driving cells in a defined rhythm, the robot can creep forward or backward like a worm. A mathematical model is built to express the energy coupling efficiency, and experiments are performed to test the efficiency and capacity of the energy transfer. The results show that the wireless energy supply provides sufficient power. The velocity and navigation ability were measured in in vitro experiments in a pig intestine, and the results demonstrate that the robot can navigate the intestine easily. In general, the wireless power supply and wireless communication remove the need for a connecting wire and improve the motion flexibility. Meanwhile, the presented locomotion mechanism and principle show high reliability and good adaptability to the in vitro intestine. This research lays a solid foundation for future practical application of the robotic endoscope.
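The abstract mentions a mathematical model for the energy coupling efficiency without stating it. A commonly used textbook expression for the maximum efficiency of a two-coil resonant inductive link, in terms of the coupling coefficient k and the coil quality factors, is sketched below; this is not necessarily the model used in this work, and the example values are purely illustrative.

```python
# Hedged sketch of a standard link-efficiency figure of merit for magnetically
# coupled resonant coils; the paper's own coupling model may differ in detail.
import math

def max_link_efficiency(k: float, q_tx: float, q_rx: float) -> float:
    """Maximum power-transfer efficiency of a resonant inductive link with
    coupling coefficient k and coil quality factors q_tx, q_rx, assuming an
    optimally matched load (textbook result, not taken from this paper)."""
    fom = (k ** 2) * q_tx * q_rx                  # figure of merit k^2 * Q_tx * Q_rx
    return fom / (1.0 + math.sqrt(1.0 + fom)) ** 2

# Illustrative example: k = 0.05 and Q_tx = Q_rx = 100 give roughly 67% link efficiency.
print(max_link_efficiency(0.05, 100, 100))
```

The expression makes the design trade-off explicit: because the coupling coefficient between a large external transmit coil and a small in-body secondary coil is inherently low, high coil quality factors are needed to keep the deliverable power adequate, which is consistent with the capacity testing reported in the abstract.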