2020
DOI: 10.48550/arxiv.2004.01570
Preprint

A New Method to Compare the Interpretability of Rule-based Algorithms

Abstract: Interpretability is becoming increasingly important in predictive model analysis. Unfortunately, as many authors have noted, there is still no consensus on its definition. The aim of this article is to propose a rigorous mathematical definition of the concept of interpretability, allowing fair comparisons between any rule-based algorithms. This definition is built from three notions, each of which is quantitatively measured by a simple formula: predictivity, stability and simplicity. While predictivity has been …
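As a minimal sketch of how these three notions could be scored for a rule-based regressor: the concrete formulas below (one minus a normalized risk for predictivity, a Dice-Sorensen overlap between rule sets fitted on independent samples for stability, an inverse condition count for simplicity) are illustrative assumptions, not the paper's definitions.

# Illustrative sketch only; the formulas are assumptions, not the paper's.
from typing import FrozenSet, Sequence, Set

Rule = FrozenSet[str]  # a rule as a set of conditions, e.g. {"x1 <= 0.5", "x3 > 2"}

def predictivity(y_true: Sequence[float], y_pred: Sequence[float]) -> float:
    """One minus the mean squared error normalized by the variance of y (assumed form)."""
    n = len(y_true)
    mean = sum(y_true) / n
    risk = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    var = sum((t - mean) ** 2 for t in y_true) / n
    return 1.0 - risk / var if var > 0 else 0.0

def stability(rules_a: Set[Rule], rules_b: Set[Rule]) -> float:
    """Dice-Sorensen overlap between rule sets learned on two independent samples (assumed form)."""
    if not rules_a and not rules_b:
        return 1.0
    return 2.0 * len(rules_a & rules_b) / (len(rules_a) + len(rules_b))

def simplicity(rules: Set[Rule]) -> float:
    """Inverse of the total number of conditions across all rules (assumed form)."""
    total = sum(len(r) for r in rules)
    return 1.0 / total if total else 0.0

# Hypothetical rule sets extracted from the same algorithm run on two data halves.
rs1 = {frozenset({"x1 <= 0.5"}), frozenset({"x1 > 0.5", "x2 <= 1.0"})}
rs2 = {frozenset({"x1 <= 0.5"}), frozenset({"x2 > 1.0"})}
print(predictivity([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))  # close to 1 for a good fit
print(stability(rs1, rs2))   # 2*1 / (2+2) = 0.5
print(simplicity(rs1))       # 3 conditions in total, so 1/3

Under these assumptions, each notion maps to a single number in [0, 1], which is what makes the fair comparison between rule-based algorithms described in the abstract possible.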


Cited by 1 publication (1 citation statement) · References 13 publications

“…In this paper, we measure faithfulness to the model. Earlier work has looked at global measures of this type [48] and measures that are specialized to neural networks [32], feature importance [4,9,43,45], rule-based explanations [24], surrogate explanation [35], or highlighted text [10,46,49].…”
Section: Related Work (citation type: mentioning; confidence: 99%)