Alzheimer's disease (AD) is a chronic brain disorder and the most common cause of dementia. Patients with AD experience memory loss, confusion, and other cognitive and behavioral complications. As the disease progresses, these symptoms become severe enough to interfere with the patient's daily life. Since AD is irreversible and existing treatments can only slow its progression, early diagnosis is critical in fighting the disease. In this article, we propose a novel approach for diagnosing AD from magnetic resonance imaging (MRI) scans via deep neural networks. Additionally, we propose three new propagation rules for layer-wise relevance propagation (LRP), a method for visualizing evidence in deep neural networks that offers a better understanding of a network's behavior. We also propose various LRP rule configurations to achieve better interpretability of the network. Our proposed classification method achieves 92% accuracy when classifying AD versus healthy controls, which is comparable to state-of-the-art approaches and could aid doctors in AD diagnosis while reducing the occurrence of human error. Our proposed visualization approaches also improve evidence visualization, which supports the adoption of computer-aided diagnosis in the medical domain by mitigating the "black-box" nature of neural networks.
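The abstract does not specify its three novel propagation rules, but the general LRP mechanism they extend can be sketched. The following minimal example, assuming a single dense layer with input activations `a`, weight matrix `W`, and output relevance `R_out` (all names chosen here for illustration), applies the standard LRP epsilon rule to redistribute relevance back to the inputs:

```python
import numpy as np

def lrp_epsilon(W, a, R_out, eps=1e-6):
    """Redistribute output relevance to the inputs of one dense layer
    using the standard LRP epsilon rule (a baseline rule, not the
    paper's novel rules, which the abstract does not describe)."""
    z = a @ W                      # forward pre-activations, shape (out,)
    z = z + eps * np.sign(z)       # stabiliser avoids division by zero
    s = R_out / z                  # relevance per unit of pre-activation
    return a * (W @ s)             # input relevance, shape (in,)

# Toy layer: 3 inputs, 2 outputs.
W = np.array([[1.0, -0.5],
              [0.5,  1.0],
              [-1.0, 0.5]])
a = np.array([1.0, 2.0, 0.5])
R_out = np.array([0.7, 0.3])

R_in = lrp_epsilon(W, a, R_out)
print(R_in)
# The rule approximately conserves total relevance across the layer:
print(R_in.sum(), R_out.sum())
```

Applied layer by layer from the classifier output back to the input image, this yields a voxel-level relevance map, which is the kind of evidence visualization the abstract refers to.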
Multiple recent studies have demonstrated that deep networks can generate realistic-looking textures and stylised images from a single texture example. However, these approaches suffer from drawbacks. Generative adversarial networks are, in general, difficult to train. Because multiple feature variations are encoded in their latent representation, a priori information is required to generate images with specific features. Auto-encoders, in turn, are prone to generating blurry outputs, largely because of their inability to parameterise complex distributions. The authors present a novel texture generative model architecture that extends the variational auto-encoder approach and gradually increases the accuracy of details in the reconstructed images. Thanks to the proposed architecture, the model learns a higher level of detail resulting from the partial disentanglement of latent variables, and it is capable of synthesising complex real-world textures. The model consists of multiple separate latent layers, each responsible for learning a gradual level of texture detail. Separate training of the latent representations increases the stability of the learning process and provides partial disentanglement of latent variables. Experiments with the proposed architecture demonstrate the potential of variational auto-encoders in the domain of texture synthesis and show that the model tends to yield sharper reconstructions as well as sharper synthesised texture images.
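The abstract does not detail the multi-layer latent architecture, but the variational auto-encoder machinery it builds on is standard. A minimal sketch of two generic VAE ingredients, the reparameterisation trick for sampling a latent code and the closed-form KL term for a diagonal Gaussian posterior (generic VAE mechanics, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterise(mu, log_var):
    """Sample z = mu + sigma * eps with eps ~ N(0, I), so that the
    sampling step stays differentiable with respect to mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian q."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# Toy 4-dimensional latent; mu = 0, log_var = 0 means q already equals
# the standard-normal prior, so the KL penalty is exactly zero.
mu = np.zeros(4)
log_var = np.zeros(4)
z = reparameterise(mu, log_var)
print(z.shape)                        # (4,)
print(kl_divergence(mu, log_var))    # 0.0
```

In the architecture the abstract describes, one would have several such latent blocks trained separately, each contributing a progressively finer level of texture detail; the sketch above covers only the per-block sampling and regularisation terms.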