Purpose: To perform automatic assessment of dementia severity using a deep learning framework applied to resting-state functional magnetic resonance imaging (rs-fMRI) data.
Method: We divided 133 Alzheimer’s disease (AD) patients with clinical dementia rating (CDR) scores from 0.5 to 3 into two groups based on dementia severity; the very mild/mild (CDR: 0.5–1) and moderate-to-severe (CDR: 2–3) groups consisted of 77 and 56 subjects, respectively. We used rs-fMRI to extract functional connectivity features, calculated using independent component analysis (ICA), and performed automated severity classification with three-dimensional convolutional neural networks (3D-CNNs) based on deep learning.
Results: The mean balanced classification accuracy was 0.923 ± 0.042 (p < 0.001), with a specificity of 0.946 ± 0.019 and a sensitivity of 0.896 ± 0.077. The rs-fMRI data indicated that the medial frontal, sensorimotor, executive control, dorsal attention, and visual-related networks correlated most strongly with dementia severity.
Conclusions: Our novel CDR-based classification using rs-fMRI is an acceptable objective severity indicator. In the absence of trained neuropsychologists, dementia severity can be objectively and accurately classified using a 3D deep learning framework with rs-fMRI independent components.
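Balanced accuracy, the headline figure here, is the mean of sensitivity and specificity, which makes it robust to the unequal group sizes (77 vs. 56 subjects). The abstract does not give the network architecture, so the following is only a minimal PyTorch sketch of the stated approach: a small 3D-CNN that takes per-subject spatial ICA component maps as input channels and predicts the binary severity label. The `Simple3DCNN` name, layer sizes, and input dimensions are all assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    """Minimal 3D-CNN binary classifier over spatial ICA component maps.

    Input shape: (batch, n_components, D, H, W), one channel per
    independent component map. All sizes here are illustrative.
    """
    def __init__(self, n_components: int = 10, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(n_components, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),   # global pooling -> fixed-size vector
            nn.Flatten(),
            nn.Linear(32, n_classes),  # very mild/mild vs. moderate-to-severe
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Toy batch: 4 subjects, 10 ICA maps each, 32^3 voxel volumes.
model = Simple3DCNN(n_components=10)
logits = model(torch.randn(4, 10, 32, 32, 32))  # -> shape (4, 2)
```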
This paper reviews the second AIM learned ISP challenge and describes the proposed solutions and results. The participating teams solved a real-world RAW-to-RGB mapping problem, where the goal was to map original low-quality RAW images captured by the Huawei P20 device to the same photos obtained with a Canon 5D DSLR camera. The task encompassed a number of complex computer vision subtasks, such as image demosaicing, denoising, white balancing, color and contrast correction, and demoireing. The target metric used in this challenge combined fidelity scores (PSNR and SSIM) with the solutions' perceptual quality as measured in a user study. The proposed solutions significantly improved on the baseline results, defining the state of the art for practical image signal processing pipeline modeling. * A. Ignatov and R. Timofte ({andrey,radu.timofte}@vision.ee.ethz.ch, ETH Zurich) are the challenge organizers, while the other authors participated in the challenge. Appendix A lists the authors' teams and affiliations. AIM 2020 webpage: https://data.vision.ee.ethz.ch/cvl/aim20/
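As an illustration of the fidelity half of the challenge metric, the sketch below computes PSNR and SSIM between a reconstructed RGB output and its DSLR target using scikit-image. The `fidelity_scores` helper and the random toy images are hypothetical, and how the fidelity scores were weighted against the user-study perceptual results is not specified in this summary.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def fidelity_scores(target: np.ndarray, output: np.ndarray):
    """PSNR and SSIM between a DSLR target and a reconstructed RGB image.

    Both arrays are uint8 RGB images of identical shape (H, W, 3).
    """
    psnr = peak_signal_noise_ratio(target, output, data_range=255)
    # channel_axis requires scikit-image >= 0.19 (older versions: multichannel=True)
    ssim = structural_similarity(target, output, channel_axis=-1, data_range=255)
    return psnr, ssim

# Toy stand-ins for a Canon 5D target and a RAW-to-RGB model output.
rng = np.random.default_rng(0)
target = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
output = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(fidelity_scores(target, output))
```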
The analysis of fundus photographs is a useful diagnostic tool for diverse retinal diseases such as diabetic retinopathy and hypertensive retinopathy. Specifically, the morphology of retinal vessels is used as a classification measure for retinal diseases, and the automatic processing of fundus images has been investigated widely for diagnostic efficiency. Automatic segmentation of retinal vessels is essential and must precede any computer-aided diagnosis system. In this study, we propose a method that performs patch-based pixel-wise segmentation with convolutional neural networks (CNNs) in fundus images for automatic retinal vessel segmentation. We construct a network composed of several modules, each of which includes convolutional layers and upsampling layers. The feature maps produced by the modules are concatenated into a single feature map to capture coarse and fine vessel structures simultaneously. The concatenated feature map is followed by a convolutional layer that performs the pixel-wise prediction. The performance of the proposed method is measured on the DRIVE dataset, and we show that it is comparable to other state-of-the-art algorithms.
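The abstract outlines the architecture but not its exact configuration, so here is a minimal PyTorch sketch of the described idea: two hypothetical modules produce feature maps at different scales, the coarse map is upsampled back to patch resolution, the maps are concatenated, and a final 1×1 convolution yields the pixel-wise vessel prediction. Module depths, channel counts, and the 48×48 patch size are assumptions.

```python
import torch
import torch.nn as nn

class VesselSegNet(nn.Module):
    """Sketch: parallel modules -> concatenated feature map -> pixel-wise head."""
    def __init__(self, in_ch: int = 1):
        super().__init__()
        self.fine = nn.Sequential(    # full-resolution module: fine vessel detail
            nn.Conv2d(in_ch, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.coarse = nn.Sequential(  # downsampled module: coarse vessel structure
            nn.MaxPool2d(2),
            nn.Conv2d(in_ch, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        )
        # 1x1 convolution over the concatenated maps gives per-pixel logits.
        self.head = nn.Conv2d(16 + 32, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.fine(x), self.coarse(x)], dim=1)
        return self.head(fused)

# One 48x48 single-channel patch (e.g., the green channel of a fundus image).
patch = torch.randn(1, 1, 48, 48)
logits = VesselSegNet()(patch)  # -> (1, 1, 48, 48) per-pixel vessel logits
```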