Amyloid brain positron emission tomography (PET) images are analyzed visually and subjectively by physicians, at considerable cost in time and effort, to determine β-amyloid (Aβ) deposition. We designed a convolutional neural network (CNN) model that predicts Aβ-positive versus Aβ-negative status. We performed 18F-florbetaben (FBB) brain PET on controls and on patients (n=176) with mild cognitive impairment and Alzheimer's disease (AD). We classified brain PET images visually according to the brain amyloid plaque load (BAPL) score. We designed a Visual Geometry Group (VGG16) model for the visual assessment of slice-based samples. To evaluate only the gray matter and not the white matter, gray matter masking (GMM) was applied to the slice-based standard samples. All performance metrics were higher with GMM than without GMM (accuracy 92.39 vs. 89.60, sensitivity 87.93 vs. 85.76, and specificity 98.94 vs. 95.32). For the patient-based standard, accuracy was almost the same (89.78 vs. 89.21), sensitivity was lower (93.97 vs. 99.14), and specificity was higher (81.67 vs. 70.00). The area under the curve of the VGG16 model that observed only the gray matter region was slightly higher than that of the model that observed the whole brain, for both slice-based and patient-based decision processes. Amyloid brain PET images can thus be analyzed appropriately with a CNN model to predict Aβ-positive versus Aβ-negative status.
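As an illustration of the two decision levels described above, the sketch below (not the authors' code; the thresholds and the aggregation rule are assumptions) shows a binary gray matter mask zeroing out non-gray-matter voxels of a slice, and slice-level Aβ-positive probabilities being aggregated into a patient-level call:

```python
# Illustrative sketch only: gray matter masking and slice-to-patient
# aggregation. The 0.5 thresholds and majority rule are assumptions,
# not values reported by the authors.

def apply_gm_mask(slice_voxels, gm_mask):
    """Keep only voxels where the binary gray matter mask is 1."""
    return [v * m for v, m in zip(slice_voxels, gm_mask)]

def slice_to_patient(slice_probs, slice_threshold=0.5, positive_fraction=0.5):
    """Call a patient Abeta-positive when enough slices are positive."""
    positives = sum(p >= slice_threshold for p in slice_probs)
    return "positive" if positives / len(slice_probs) > positive_fraction else "negative"
```

In this sketch, masked slices would be fed to the VGG16 classifier, and the resulting per-slice probabilities drive the patient-level decision.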
Background: Although amyloid beta (Aβ) imaging is widely used for diagnosing and monitoring Alzheimer's disease (AD) in clinical practice, a head-to-head comparison between 18F-flutemetamol and 18F-florbetaben has rarely been attempted in an AD mouse model. We compared Aβ PET images obtained with 18F-flutemetamol and 18F-florbetaben in a recently developed APPswe mouse model, C57BL/6-Tg (NSE-hAPPsw) Korl.

Results: After injections (0.23 mCi) of 18F-flutemetamol and 18F-florbetaben at a time interval of 2–3 days, we compared group differences in SUVR and kinetic parameters between the AD (n = 7) and control (n = 7) mice, as well as between the 18F-flutemetamol and 18F-florbetaben images. In addition, bio-distribution and histopathology studies were conducted. On visual inspection and VOI-based SUVR analysis, the AD group presented more prominent uptake than the control group in both the 18F-florbetaben and 18F-flutemetamol images. On kinetic analysis, the 18F-florbetaben images showed differences in K1 and k4 between the AD and control groups, whereas the 18F-flutemetamol images showed no significant difference. The 18F-florbetaben images showed more prominent cortical uptake and matched the thioflavin S staining images better than the 18F-flutemetamol images did. In contrast, the 18F-flutemetamol images presented higher K1, k4, and K1/k2 values than the 18F-florbetaben images, and showed prominent uptake in the bowel and bladder, consistent with higher bio-distribution in the kidney, lung, blood, and heart.

Conclusions: Compared with 18F-flutemetamol images, 18F-florbetaben images showed more prominent visual uptake intensity, higher SUVR, and higher correlation with the pathology. In contrast, 18F-flutemetamol was more actively metabolized than 18F-florbetaben (Son et al. in J Nucl Med 58(Suppl 1):S278, 2017).

Electronic supplementary material: The online version of this article (10.1186/s12868-018-0447-7) contains supplementary material, which is available to authorized users.
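The VOI-based SUVR used for the group comparison above is the ratio of mean uptake in a target region to mean uptake in a reference region. A minimal sketch with plain Python lists standing in for VOI voxel values (the region choices, e.g. cerebellar reference, are illustrative, not taken from the study):

```python
# Minimal SUVR sketch: standardized uptake value ratio between a target
# VOI and a reference VOI. Voxel lists are placeholders for real VOI data.

def mean_uptake(voxels):
    """Mean voxel value over a volume of interest."""
    return sum(voxels) / len(voxels)

def suvr(target_voxels, reference_voxels):
    """SUVR = mean target uptake / mean reference uptake."""
    return mean_uptake(target_voxels) / mean_uptake(reference_voxels)
```

A cortical VOI with twice the mean uptake of the reference region yields an SUVR of 2.0.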
Conventional data augmentation (DA) techniques, which are used to improve the performance of predictive models when balanced training data are lacking, require the proper repeating operation (e.g., rotation and mirroring) to be defined according to the target class distribution. Although DA using generative adversarial networks (GANs) has the potential to overcome these disadvantages, this technique has rarely been applied to medical images, and in particular, quantitative evaluation has rarely been used to determine whether the generated images have enough realism and diversity for DA. In this study, we synthesized 18F-florbetaben (FBB) images using a conditional GAN (CGAN). The generated images were evaluated with various measures, and we report the image characteristics and the quantitative similarity values at which the generated images can be expected to successfully augment the data for DA. The method includes (1) a conditional WGAN-GP to learn the axial image distribution extracted from pre-processed 3D FBB images, (2) a pre-trained DenseNet121 and model-agnostic metrics for visual and quantitative measurement of the generated image distribution, and (3) a machine learning model for observing the improvement in generalization performance from the generated dataset. The visual Turing test showed similarity in the descriptions of typical patterns of amyloid deposition for each of the generated images. However, differences in similarity and classification performance per axial level were observed that did not agree with the visual evaluation. Experimental results demonstrated that quantitative measurements detected the similarity between the two distributions and revealed mode collapse better than the visual Turing test and t-SNE did.
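The WGAN-GP named in step (1) adds a gradient penalty that pushes the critic's gradient norm toward 1 at points interpolated between real and generated samples. The toy sketch below avoids an autograd framework by using a linear critic f(x) = w·x, whose input gradient is exactly w; the penalty weight λ = 10 follows the common WGAN-GP default, and everything here is an illustration, not the paper's model:

```python
# Toy WGAN-GP gradient penalty sketch. A linear critic f(x) = w . x has
# input gradient w everywhere, so the penalty lambda * (||grad|| - 1)^2
# can be shown without automatic differentiation.

def interpolate(real, fake, eps):
    """Point on the line between a real and a generated sample (eps in [0, 1])."""
    return [eps * r + (1 - eps) * f for r, f in zip(real, fake)]

def gradient_penalty(w, lam=10.0):
    """Penalty on the critic's gradient norm at the interpolated point."""
    grad_norm = sum(wi * wi for wi in w) ** 0.5  # ||grad_x f|| = ||w|| for a linear critic
    return lam * (grad_norm - 1.0) ** 2
```

In a real conditional WGAN-GP, the gradient at each interpolated image is obtained by automatic differentiation and the penalty is added to the critic loss.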
Our purpose in this study was to evaluate the clinical feasibility of deep-learning techniques for 18F-florbetaben (FBB) positron emission tomography (PET) image reconstruction from data acquired in a short time. We reconstructed raw FBB PET data of 294 patients, acquired for 20 min and 2 min, into standard-time scanning PET (PET20m) and short-time scanning PET (PET2m) images. We then generated a standard-time-like PET image (sPET20m) from each PET2m image using a deep-learning network and performed qualitative and quantitative analyses to assess whether the sPET20m images were suitable for clinical applications. In our internal validation, sPET20m images showed substantial improvement on all quality metrics compared with the PET2m images, and there was only a small mean difference between the standardized uptake value ratios of the sPET20m and PET20m images. In a Turing test, physicians could not reliably distinguish generated PET images from real PET images, and three nuclear medicine physicians interpreted the generated PET images with high accuracy and agreement. We obtained similar quantitative results in temporal and external validations. Thus, deep-learning techniques can generate interpretable PET images from low-quality PET images acquired with a short scanning time. Although more clinical validation is needed, we confirmed that short-scanning protocols combined with a deep-learning technique may be usable in clinical applications.
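A standard quantitative quality metric for comparing a generated sPET20m image against the true PET20m image is peak signal-to-noise ratio (PSNR); the abstract does not name its specific metrics, so this is an assumed illustration on flattened voxel lists:

```python
import math

# PSNR sketch for comparing a generated image against a reference image.
# Flattened voxel lists stand in for 3D volumes; the peak value of 1.0
# assumes intensities normalized to [0, 1] (an illustrative choice).

def psnr(reference, generated, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = sum((r - g) ** 2 for r, g in zip(reference, generated)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(peak ** 2 / mse)
```

Higher PSNR for sPET20m than for PET2m, both measured against PET20m, would reflect the quality improvement reported above.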
Alzheimer's disease (AD) is an irreversible, progressive cerebral disease, with most of its symptoms appearing after 60 years of age. AD has been largely attributed to the accumulation of amyloid beta (Aβ), but a complete cure has remained elusive. 18F-florbetaben amyloid positron emission tomography (PET) has been shown to be a more powerful tool for understanding AD-related brain changes than magnetic resonance imaging and computed tomography. In this paper, we propose an accurate classification method for scoring brain amyloid plaque load (BAPL) based on deep convolutional neural networks. A joint discriminative loss function was formulated by adding a discriminative intra-class loss function to the conventional (cross-entropy) loss function. The performance of the proposed joint loss function was compared with that of the conventional loss function in three state-of-the-art deep neural network architectures. The intra-class loss function significantly improved the BAPL classification performance. In addition, we showed that the mix-up data augmentation method, originally proposed for natural image classification, is also useful for medical image classification.

Appl. Sci. 2020, 10, 965

…fluid) Aβ/tau or amyloid PET [4], but clinical trials of Aβ-targeting drugs have been unsuccessful. This failure might be attributable to the late application of the treatment, highlighting the need for early treatment after early diagnosis [5]. Amyloid markers are the earliest-appearing biomarkers of the disease [6–8]. In recent years, AD has been identified in MRI or PET brain images by various machine learning methods, including deep learning [3,9–13]. Zhang [9] classified AD and normal control images by a combined kernel technique with a support vector machine.
Sarraf [10] developed the program DeepAD for AD diagnosis, which analyzes sMRI and fMRI brain scans separately at the slice and subject levels with two convolutional neural networks (CNNs), LeNet and GoogLeNet. Farooq [11] classified AD in MRI scans with a deep CNN-based multi-class classification algorithm built on GoogLeNet and ResNet. In a related study of PET images, Kang [12] proposed a classification method for scoring brain amyloid plaque load (BAPL) in FBB PET images based on a deep CNN developed by the Visual Geometry Group. Liu [13] combined a CNN with recurrent neural networks (RNNs) to classify amyloid status in fluorodeoxyglucose (FDG) PET images. Although Kang's method satisfactorily classified images as amyloid-positive or amyloid-negative, it could not accurately identify BAPL2 in a ternary classification, because BAPL2 represents a weak amyloid load [12]; in some cases, even the interpreter cannot easily distinguish between BAPL1 and BAPL2. Liu [13] studied FDG PET images, which are more appropriate as progression markers than as diagnostic markers, and clinically classified them as normal, mild cognitive impairment (MCI), or AD. Choi [7] combined florbetapir (not FBB) amyloid PET and FDG PET image...
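The joint discriminative loss and mix-up augmentation described in the abstract above can be sketched as follows; the exact form of the intra-class term and its weighting used by the authors may differ, so the function names and the weight of 0.1 are illustrative assumptions:

```python
import math

# Hedged sketch: conventional cross-entropy plus a center-loss-style
# intra-class term that pulls same-class embeddings toward their class
# mean, and mix-up augmentation as a convex combination of two samples.

def cross_entropy(probs, label):
    """Cross-entropy for one sample's softmax output and its true label."""
    return -math.log(probs[label])

def intra_class_loss(embeddings, center):
    """Mean squared distance of each embedding to its class center."""
    return sum(
        sum((e - c) ** 2 for e, c in zip(emb, center)) for emb in embeddings
    ) / len(embeddings)

def joint_loss(probs, label, embeddings, center, weight=0.1):
    """Joint objective: cross-entropy + weighted intra-class term."""
    return cross_entropy(probs, label) + weight * intra_class_loss(embeddings, center)

def mixup(x1, x2, lam):
    """Mix-up: blend two samples with mixing coefficient lam in [0, 1]."""
    return [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
```

In training, the intra-class term tightens each BAPL class cluster in embedding space, which is what makes the hard BAPL1-versus-BAPL2 boundary easier to learn.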