2020
DOI: 10.48550/arxiv.2010.12265
Preprint

Model Interpretability through the Lens of Computational Complexity

Pablo Barceló,
Mikaël Monet,
Jorge Pérez
et al.

Abstract: In spite of several claims stating that some models are more interpretable than others - e.g., "linear models are more interpretable than deep neural networks" - we still lack a principled notion of interpretability to formally compare among different classes of models. We make a step towards such a notion by studying whether folklore interpretability claims have a correlate in terms of computational complexity theory. We focus on local post-hoc explainability queries that, intuitively, attempt to answer why ind…

Cited by 4 publications (3 citation statements)
References 25 publications
“…The importance of evaluating explanation methods has been discussed in the literature [21,50]. There are various attempts to measure different aspects of an explanation: usefulness to humans [18,25,33]; complexity [32]; difficulty of answering queries [7]; and robustness [3]. In this paper, we measure faithfulness to the model.…”
Section: Related Work
confidence: 99%
“…The idea of using complexity as a proxy for interpretability was also proposed in [3], where the authors stated that the computational complexity of a model can be used as a metric of interpretability as it directly resembles the number of operations that must be interpreted by humans.…”
Section: Related Work
confidence: 99%
“…As the value of the M′ moves away from 0, the interpretability of the system decreases. The idea of using complexity as a proxy for interpretability was also proposed in [4], where the authors stated that the computational complexity of a model can be used as a metric of interpretability as it directly resembles the number of operations that must be interpreted by humans.…”
Section: Related Work
confidence: 99%