Early detection of Alzheimer's disease (AD) is crucial for identifying better treatment plans for patients, since AD is not curable. However, the lack of interpretability in high-performing prediction models may prevent their adoption in clinical practice for AD detection. It is therefore important to develop highly interpretable models that build trust by exposing the factors contributing to their decisions. In this paper, we combine the ProtoPNet architecture with popular pretrained deep learning models to add interpretability to AD classification on MRI scans from the ADNI and OASIS datasets. We find that a ProtoPNet model with a DenseNet121 backbone reaches 90% accuracy while providing explanatory illustrations of the model's reasoning behind its predictions. We also note that, in most cases, the ProtoPNet models perform slightly worse than their black-box counterparts; however, their ability to provide reasoning and transparency in the prediction process can contribute to wider adoption of prediction models in clinical practice.
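To make the architecture concrete, below is a minimal PyTorch sketch of how a ProtoPNet-style prototype head can be attached to a DenseNet121 backbone, following the prototype-distance formulation of the original ProtoPNet paper (Chen et al., 2019). This is illustrative, not the authors' exact configuration: the weight tag, `num_prototypes`, `proto_dim`, and all hyperparameters here are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class ProtoPNetDense121(nn.Module):
    """Illustrative ProtoPNet-style head on a DenseNet121 backbone."""

    def __init__(self, num_classes=2, num_prototypes=20, proto_dim=128):
        super().__init__()
        # Pretrained DenseNet121, keeping only the convolutional feature extractor
        # (outputs 1024 channels). MRI slices would be replicated to 3 channels.
        self.backbone = models.densenet121(weights="IMAGENET1K_V1").features
        # 1x1 convs project backbone features down to the prototype dimension.
        self.add_on = nn.Sequential(
            nn.Conv2d(1024, proto_dim, kernel_size=1), nn.ReLU(),
            nn.Conv2d(proto_dim, proto_dim, kernel_size=1), nn.Sigmoid(),
        )
        # Learnable prototypes: P vectors of length proto_dim, each compared to
        # every spatial patch of the feature map via squared L2 distance.
        self.prototypes = nn.Parameter(torch.rand(num_prototypes, proto_dim, 1, 1))
        # Final linear layer maps prototype similarities to class logits.
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, x):
        z = self.add_on(self.backbone(x))               # (B, D, H, W)
        # ||z - p||^2 = ||z||^2 - 2 z.p + ||p||^2, computed densely with conv2d.
        z_sq = F.conv2d(z ** 2, torch.ones_like(self.prototypes))
        zp = F.conv2d(z, self.prototypes)
        p_sq = (self.prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)
        dist = F.relu(z_sq - 2 * zp + p_sq)             # (B, P, H, W)
        # Similarity grows as distance shrinks; max-pooling keeps, for each
        # prototype, its best-matching image patch.
        sim = torch.log((dist + 1) / (dist + 1e-4))
        sim = F.max_pool2d(sim, kernel_size=sim.shape[2:]).flatten(1)  # (B, P)
        return self.classifier(sim)
```

The max-pooled similarities are what make the model self-explanatory: each prototype's best-matching patch can be projected back onto the input MRI slice, yielding the kind of "this looks like that" illustration the paper uses to explain its predictions.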