2022
DOI: 10.48550/arxiv.2208.00780
Preprint

Visual correspondence-based explanations improve AI robustness and human-AI team accuracy

Abstract: Explaining artificial intelligence (AI) predictions is increasingly important and even imperative in many high-stakes applications where humans are the ultimate decision makers. In this work, we propose two novel architectures of self-interpretable image classifiers that first explain, and then predict (as opposed to post-hoc explanations) by harnessing the visual correspondences between a query image and exemplars. Our models consistently improve (by 1 to 4 points) on out-of-distribution (OOD) datasets while …
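The "first explain, then predict" pipeline the abstract describes can be pictured as a two-stage exemplar classifier: retrieve labeled exemplars for a query, match local patches between the query and each exemplar, and let the matched patches both drive the prediction and serve as the explanation. The sketch below is only an illustrative reconstruction of that idea, not the paper's actual architectures: the helper names (explain_then_predict, correspondence_score), the cosine-similarity patch matching, and the majority vote over re-ranked exemplars are all assumptions made for this example.

```python
import numpy as np

def cosine_sim(a, b):
    """Row-wise cosine similarity between two feature matrices."""
    a = a / (np.linalg.norm(a, axis=-1, keepdims=True) + 1e-8)
    b = b / (np.linalg.norm(b, axis=-1, keepdims=True) + 1e-8)
    return a @ b.T

def correspondence_score(query_patches, exemplar_patches, top_m=5):
    """Score an exemplar by its best-matching patch pairs with the query.

    Returns the mean similarity of the top_m matches and the indices of
    the query patches involved (the raw material for a visual explanation).
    """
    sim = cosine_sim(query_patches, exemplar_patches)   # (P_query, P_exemplar)
    best = sim.max(axis=1)                              # best match per query patch
    top_idx = np.argsort(best)[-top_m:]
    return best[top_idx].mean(), top_idx

def explain_then_predict(query_feat, query_patches, bank, k=20, top_m=5):
    """bank: list of (label, global_feature, patch_features) exemplars."""
    # Stage 1: retrieve the k nearest exemplars by global feature similarity.
    globals_ = np.stack([g for _, g, _ in bank])
    nn_idx = np.argsort(-cosine_sim(query_feat[None], globals_)[0])[:k]

    # Stage 2: re-rank retrieved exemplars by patch-level correspondence,
    # recording which patches matched -- this is the explanation.
    scored = []
    for i in nn_idx:
        label, _, patches = bank[i]
        score, matched = correspondence_score(query_patches, patches, top_m)
        scored.append((score, label, i, matched))
    scored.sort(reverse=True, key=lambda t: t[0])

    # Prediction = majority label among the top re-ranked exemplars;
    # the (exemplar, matched-patch) pairs are returned alongside it.
    top = scored[:max(1, k // 4)]
    labels = [label for _, label, _, _ in top]
    prediction = max(set(labels), key=labels.count)
    explanation = [(i, matched) for _, _, i, matched in top]
    return prediction, explanation
```

The point of returning the matched patch pairs together with the label is that a human can inspect the evidence behind each prediction before trusting it, which is the mechanism the abstract credits for improving human-AI team accuracy.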

Cited by 0 publications
References 47 publications