Proceedings of the Canadian Conference on Artificial Intelligence 2021
DOI: 10.21428/594757db.fb59ce6c
Using ProtoPNet for Interpretable Alzheimer’s Disease Classification

Abstract: Early detection of Alzheimer's disease (AD) is important for identifying better treatment plans for patients, as AD is not curable. On the other hand, the lack of interpretability of high-performing prediction models may prevent the incorporation of such models into clinical use for AD detection. Accordingly, it is important to develop highly interpretable models that can create trust in the predictions by showing the factors that contribute to the models' decisions. In this paper, we us…

Cited by 16 publications (12 citation statements)
References 18 publications
“…The authors in [142] proposed a prototypical part network (ProtoPNet) that can highlight the image regions used for decision-making and can explain the reasoning process behind the classification by comparing representative patches of the test image with prototypes learned from a large amount of training data. To date, several studies have tested the explanation model proposed in [142], which was able to highlight the image regions used for decision-making in medical imaging fields, such as mass lesion classification [143], lung disease detection [144, 145], and Alzheimer's disease classification [146]. Future research in the brain tumor classification field will need to test how explainable models influence the attitudes and decision-making processes of radiologists and other clinicians.…”
Section: Discussion
confidence: 99%
“…Prototypical Part Network (ProtoPNet) [27] is an interpretable deep neural network that performs classification by comparing the features extracted from the input image against class-discriminative prototypes. ProtoPNet was utilised for Alzheimer's disease classification with DenseNet-121 as a feature extractor, and the analysis showed that ProtoPNet provided reasoning for its predictions that can facilitate its adoption in clinical practice [111]. Anatomical priors and other domain-specific information related to the medical image analysis task can be incorporated into the model to make its predictions interpretable.…”
Section: Explainability - For Enhanced Understanding of AI in Medical ...
confidence: 99%
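The comparison the citation statement describes — matching patches of a feature map against learned class-discriminative prototypes — can be sketched as a minimal PyTorch module. This is an illustrative simplification of the ProtoPNet prototype layer, not the authors' implementation; the prototype count, channel width, and the log-similarity formula follow the original ProtoPNet paper's conventions, while the class names and defaults here are assumptions for the example.

```python
import torch
import torch.nn as nn


class PrototypeLayer(nn.Module):
    """Sketch of a ProtoPNet-style prototype layer: classify by similarity
    between feature-map patches and learned prototypes."""

    def __init__(self, num_prototypes: int = 20, channels: int = 128,
                 num_classes: int = 2):
        super().__init__()
        # Each prototype is a learnable 1x1 patch in feature space.
        self.prototypes = nn.Parameter(torch.rand(num_prototypes, channels, 1, 1))
        # Linear layer mapping prototype similarities to class logits.
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (B, C, H, W) from a CNN backbone such as DenseNet-121.
        b, c, h, w = features.shape
        p = self.prototypes.view(1, -1, c, 1, 1)      # (1, P, C, 1, 1)
        f = features.unsqueeze(1)                     # (B, 1, C, H, W)
        # Squared L2 distance between every prototype and every spatial patch.
        dist = ((f - p) ** 2).sum(dim=2)              # (B, P, H, W)
        # Global min-pool: distance from each prototype to its closest patch;
        # the argmin location is the image region the prototype "points at".
        min_dist = dist.flatten(2).min(dim=2).values  # (B, P)
        # Turn distances into similarity scores (log activation, as in ProtoPNet).
        similarity = torch.log((min_dist + 1) / (min_dist + 1e-4))
        return self.classifier(similarity)            # (B, num_classes)


# Usage: a batch of 4 feature maps from a hypothetical backbone.
layer = PrototypeLayer(num_prototypes=20, channels=128, num_classes=2)
logits = layer(torch.rand(4, 128, 7, 7))  # shape (4, 2)
```

Interpretability comes from the min-pooled distances: for each prototype, the spatial location achieving the minimum identifies the patch of the input image most similar to that prototype, which is what lets the model highlight the regions behind its decision.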
“…The interpretability of ProtoPNet does not come at a cost in performance when compared to black-box DL models. Mohammadjafari et al. [94] utilized ProtoPNet with a DenseNet-121 architecture for Alzheimer's disease classification. Barnett et al. [8] used a modified version of ProtoPNet that utilizes fine-grained expert annotations for mass margin classification and malignancy prediction.…”
Section: Case-based Models
confidence: 99%