LoRMIkA: Local rule-based model interpretability with k-optimal associations
2020
DOI: 10.1016/j.ins.2020.05.126

Cited by 30 publications (21 citation statements)
References 20 publications
“…They also compute the number of counterfactuals and their best minimal distance to the factual explanation to assess the relevance of counterfactuals. Rajapaksha et al consider coverage (as an indicator of representativeness of a rule for a given dataset), confidence (i.e., the percentage of instances in the dataset which contain the consequent and antecedent together over the number of instances which only contain the antecedent), lift (i.e., an association between antecedent and consequent), leverage (i.e., the observed frequency between the antecedent and consequent), and the number of features in explanation for evaluating their framework against other rule-based methods [152]. Also, White and Garcez reintroduce fidelity to the underlying classifier on the basis of distance to the decision boundary [160].…”
Section: Evaluation Methods
confidence: 99%
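For concreteness, the rule-quality metrics named in this excerpt (coverage, confidence, lift, leverage) can be computed as follows. This is a minimal sketch using the standard association-rule definitions consistent with the descriptions above; the function name, data encoding, and toy example are illustrative, not taken from the paper.

```python
# Illustrative computation of standard rule-quality metrics for a rule
# antecedent -> consequent over a dataset of instances, where each
# instance is a set of items (feature conditions). All names are
# hypothetical and chosen for this sketch.

def rule_metrics(data, antecedent, consequent):
    n = len(data)
    n_a = sum(1 for row in data if antecedent <= row)               # antecedent matches
    n_c = sum(1 for row in data if consequent <= row)               # consequent matches
    n_ac = sum(1 for row in data if antecedent | consequent <= row) # both match

    coverage = n_a / n                               # share of instances the rule applies to
    confidence = n_ac / n_a if n_a else 0.0          # P(consequent | antecedent)
    lift = confidence / (n_c / n) if n_c else 0.0    # association strength vs. independence
    leverage = n_ac / n - (n_a / n) * (n_c / n)      # observed minus expected co-occurrence

    return {"coverage": coverage, "confidence": confidence,
            "lift": lift, "leverage": leverage}


# Tiny toy dataset: each row is a set of feature conditions plus a class label.
data = [{"x>1", "y=0", "cls=pos"}, {"x>1", "cls=pos"},
        {"y=0", "cls=neg"}, {"x>1", "y=0", "cls=pos"}]
print(rule_metrics(data, {"x>1"}, {"cls=pos"}))
# {'coverage': 0.75, 'confidence': 1.0, 'lift': 1.333..., 'leverage': 0.1875}
```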
“…Laugel et al measure how justified counterfactuals are by averaging a binary score (one if the explanation is justified following the proposed definition, zero otherwise) over all the generated explanations [100], [144]. It is worth noting that the run-time of explanation generation algorithms is reported in addition to the evaluation metrics for several frameworks [132], [139], [146], [152], [156], [159].…”
Section: Evaluation Methods
confidence: 99%
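The averaged justification measure described in this excerpt reduces to a simple mean of per-explanation binary indicators. A minimal sketch, assuming a hypothetical is_justified predicate standing in for the definition given in the cited papers:

```python
# Mean of a binary per-explanation score (1 = justified, 0 = otherwise).
# `is_justified` is a hypothetical predicate; the cited works [100], [144]
# define justification precisely.
def justification_score(explanations, is_justified):
    return sum(1 for e in explanations if is_justified(e)) / len(explanations)
```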
“…al introduced MUSE [10] which uses rules to explain model behaviors in user-defined subspaces. More recently, LoRMIkA [11] extracts k-optimal association rules to explain the model predictions for classification data sets.…”
Section: Surrogate Models
confidence: 99%
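To make the idea of k-optimal association rules concrete: rather than returning every rule above fixed support/confidence thresholds, a k-optimal miner keeps only the k rules that score best on a chosen quality metric. The sketch below illustrates that selection step using leverage as the ranking criterion; it is a simplified illustration under those assumptions, not LoRMIkA's actual mining procedure, and the function names and toy data are hypothetical.

```python
# Simplified sketch of k-optimal rule selection: enumerate candidate
# antecedents up to a small size, score each rule antecedent -> target
# by leverage, and keep the top k. Illustration only; the paper's actual
# k-optimal mining algorithm differs.
from itertools import combinations

def leverage(data, antecedent, consequent):
    n = len(data)
    p_a = sum(1 for row in data if antecedent <= row) / n
    p_c = sum(1 for row in data if consequent <= row) / n
    p_ac = sum(1 for row in data if antecedent | consequent <= row) / n
    return p_ac - p_a * p_c

def k_optimal_rules(data, items, target, k=3, max_len=2):
    """Return the k rules (antecedent -> target) with the highest leverage."""
    scored = []
    for size in range(1, max_len + 1):
        for ante in combinations(items, size):
            scored.append((leverage(data, set(ante), {target}), set(ante)))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:k]

data = [{"x>1", "y=0", "cls=pos"}, {"x>1", "cls=pos"},
        {"y=0", "cls=neg"}, {"x>1", "y=0", "cls=pos"}]
print(k_optimal_rules(data, ["x>1", "y=0"], "cls=pos", k=2))
# [(0.1875, {'x>1'}), (0.125, {'x>1', 'y=0'})]
```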