2021
DOI: 10.48550/arxiv.2111.15000
Preprint
Deformable ProtoPNet: An Interpretable Image Classifier Using Deformable Prototypes

Abstract: Machine learning has been widely adopted in many domains, including high-stakes applications such as healthcare, finance, and criminal justice. To address concerns of fairness, accountability and transparency, predictions made by machine learning models in these critical domains must be interpretable. One line of work approaches this challenge by integrating the power of deep neural networks and the interpretability of case-based reasoning to produce accurate yet interpretable image classification models. Thes…

Cited by 4 publications (7 citation statements)
References 25 publications
“…Figure 7 illustrates the general pipeline for deriving a class prediction from the similarity scores between different parts of the input image and a set of learned prototypes. Building on ProtoPNet, Donnelly et al. [28] introduced the Deformable ProtoPNet. This prototypical case-based interpretable neural network provides spatially flexible deformable prototypes, i.e., prototypes whose parts can change their relative positions to detect semantically similar parts of an input image.…”
Section: Prototypes
confidence: 99%
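The pipeline described above, going from patch-prototype similarities to a class prediction, can be sketched as follows. This is a minimal illustration, not the authors' implementation; all dimensions, the cosine-similarity choice, and the variable names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 7x7 grid of D-dim patch features,
# P prototypes, and C output classes.
H, W, D, P, C = 7, 7, 64, 10, 5

features = rng.normal(size=(H, W, D))    # patch embeddings of one input image
prototypes = rng.normal(size=(P, D))     # learned prototype vectors
class_weights = rng.normal(size=(C, P))  # linear layer over prototype activations

# Cosine similarity between every image patch and every prototype.
f = features.reshape(-1, D)
f = f / np.linalg.norm(f, axis=1, keepdims=True)
p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
sim = f @ p.T                            # shape (H*W, P)

# Each prototype contributes its best-matching patch (global max pooling),
# so the activation is interpretable as "this part was found here".
proto_scores = sim.max(axis=0)           # shape (P,)

# Class logits are a weighted sum of prototype activations.
logits = class_weights @ proto_scores
pred = int(np.argmax(logits))
print(pred)
```

A deformable prototype generalizes this by splitting each prototype into several parts that may shift their relative spatial positions before the similarity is pooled.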
“…Recent work found that re-ranking kNN's shortlisted candidates using the patch-wise similarity between the query and training set examples can further improve classification accuracy on OOD data for some image matching tasks [29,79,75] such as face identification [29]. Furthermore, patch-level comparison is also useful in prototype-based bird classifiers [17,22]. Inspired by these prior successes and the fact that EMD-Corr and CHM-Corr base the patch-wise similarity of two images on only 5 pairs of patches instead of all 49×49 = 2,401 pairs as in [29,79,75], here we test whether our two proposed re-rankers are able to improve the test-set accuracy and robustness over kNN.…”
Section: Visual Correspondence-based Explanations Improve kNN Robustn…
confidence: 99%
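The two-stage scheme this statement describes, a global-feature kNN shortlist followed by patch-wise re-ranking restricted to a few best correspondence pairs, can be sketched as below. All sizes, the top-5 pair count, and the greedy best-match-per-patch scoring are assumptions for illustration, not the EMD-Corr/CHM-Corr algorithms themselves.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: N training images, each as a 49-patch embedding grid.
N, PATCHES, D, K = 100, 49, 32, 10

train = rng.normal(size=(N, PATCHES, D))
labels = rng.integers(0, 5, size=N)
query = rng.normal(size=(PATCHES, D))

# Stage 1: plain kNN on global (mean-pooled) features.
train_global = train.mean(axis=1)
query_global = query.mean(axis=0)
dists = np.linalg.norm(train_global - query_global, axis=1)
shortlist = np.argsort(dists)[:K]

# Stage 2: re-rank the shortlist by patch-wise similarity, scoring each
# candidate by only its 5 strongest patch correspondences rather than
# all 49x49 = 2,401 pairs.
def patch_score(q, t, pairs=5):
    qn = q / np.linalg.norm(q, axis=1, keepdims=True)
    tn = t / np.linalg.norm(t, axis=1, keepdims=True)
    sim = qn @ tn.T                      # (49, 49) patch-to-patch similarities
    best = sim.max(axis=1)               # best match for each query patch
    return np.sort(best)[-pairs:].sum()  # keep only the top-5 pairs

scores = np.array([patch_score(query, train[i]) for i in shortlist])
reranked = shortlist[np.argsort(-scores)]

# Final label by majority vote over the re-ranked neighbours.
pred = int(np.bincount(labels[reranked]).argmax())
print(pred)
```

Restricting the score to a handful of correspondence pairs keeps the re-ranking cheap and makes the supporting patch pairs themselves usable as an explanation.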
“…Our EMD-Corr and CHM-Corr present a novel combination of heatmap-based and prototype-based XAI approaches. None of the prior prototype-based XAI methods that operate at the patch level [17,53,79,22] (see Table 3 in [22]) have yet been tested on humans. Also, in preliminary tests, we find their explanation formats too dense (i.e., showing over 10 prototypes [17], 9 correspondence pairs per image [22], or an entire prototype tree [52,53]) to be useful for lay users.…”
Section: Related Work
confidence: 99%
“…Additionally, we design a measurement to quantitatively evaluate the visual-based interpretation. [35,13] extend ProtoPNet in various directions. For instance, ProtoTree [27] combines prototype learning with decision trees, resulting in an interpretable decision path consisting of prototypes.…”
Section: Introduction
confidence: 99%
“…We notice two new variants of ProtoPNet, ProtoPool [35] and Deformable ProtoPNet [13]. We do not include these two methods for comparison, as their papers are not formally published and their code is still unavailable.…”
confidence: 99%