In this work we propose three explainable deep learning architectures to automatically detect patients with Alzheimer's disease based on their language abilities.
The architectures use: (1) only part-of-speech (PoS) features; (2) only language-embedding features; and (3) both feature classes via a unified architecture.
We use self-attention mechanisms and an interpretable 1-dimensional Convolutional Neural Network (CNN) to generate two types of explanations of the model's decisions: intra-class explanations and inter-class explanations. The intra-class explanation captures the relative importance of each feature within a class, while the inter-class explanation captures the relative importance between the classes.
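The two explanation types can be illustrated with a minimal sketch: attention scores are normalized with a softmax, once over the features within a class (intra-class) and once over the feature classes themselves (inter-class). The scores and feature names below are hypothetical placeholders, not the paper's trained attention values.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax producing a probability distribution."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical attention scores for individual PoS features (intra-class).
pos_scores = np.array([2.0, 0.5, 1.0])   # e.g. noun rate, verb rate, pronoun rate
intra_class = softmax(pos_scores)        # importance of each feature within the PoS class

# Hypothetical attention scores for the two feature classes (inter-class).
class_scores = np.array([1.2, 0.8])      # PoS class vs. embedding class
inter_class = softmax(class_scores)      # relative importance between the classes

print(intra_class.round(3))
print(inter_class.round(3))
```

Each softmax output sums to one, so the attention weights can be read directly as relative importances, within a class or across classes.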
Note that although we consider two feature classes in this paper, the architecture is easily extendable to more classes because of its modularity. Extensive experiments and comparisons with several recent models show that our method outperforms them, achieving an accuracy of 92.2% and an F1 score of 0.952 on the DementiaBank dataset while also generating explanations. We show, by example, how to generate these explanations from the attention values.