Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2016
DOI: 10.1145/2939672.2939874
Interpretable Decision Sets

Abstract: One of the most important obstacles to deploying predictive models is the fact that humans do not understand and trust them. Knowing which variables are important in a model’s prediction and how they are combined can be very powerful in helping people understand and trust automatic decision making systems. Here we propose interpretable decision sets, a framework for building predictive models that are highly accurate, yet also highly interpretable. Decision sets are sets of independent if-then rules. Because e…
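A decision set, as the abstract describes, is a set of independent if-then rules. The sketch below is only an illustration of that idea: the rules, feature names, and labels are hypothetical, and first-match evaluation is used for simplicity (the paper's decision sets are unordered, with ties resolved differently).

```python
# Hypothetical decision set: independent if-then rules over a feature dict.
# Rules, thresholds, and labels are illustrative, not from the paper.

def predict(x, rules, default="low-risk"):
    """Return the label of the first rule whose conditions all hold.

    Each rule is (predicates, label); because rules are independent,
    a single matching rule is a complete explanation of the prediction.
    """
    for predicates, label in rules:
        if all(p(x) for p in predicates):
            return label
    return default  # fallback when no rule fires

rules = [
    ([lambda x: x["age"] > 50, lambda x: x["bmi"] >= 30], "high-risk"),
    ([lambda x: x["smoker"]], "high-risk"),
]

print(predict({"age": 62, "bmi": 31.0, "smoker": False}, rules))  # high-risk
print(predict({"age": 40, "bmi": 22.0, "smoker": False}, rules))  # low-risk
```

Each rule reads as a standalone sentence ("IF age > 50 AND bmi >= 30 THEN high-risk"), which is what makes this representation easy for people to inspect.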

Cited by 457 publications (107 citation statements)
References 53 publications
“…Finally, it is important to mention the fact that many advanced ML approaches work as "black boxes," hiding the importance and the correlation of a set of features from the outcome (i.e., its biological interpretation), thus hindering the deployment of predictive models because, ultimately, humans do not understand, nor trust, them (Camacho, Collins, Powers, Costello, & Collins, 2018). To overcome this issue, several efforts are being pursued to provide interpretable ML approaches able to balance interpretability, accuracy, and computational viability (Lakkaraju, Bach, & Leskovec, 2016; M. K. Yu et al., 2018).…”
Section: Potential Impact Of New Molecular and Computational Method (mentioning)
confidence: 99%
“…The DT developed in Section 2.7.2 (including the boosting conducted in Tests 10, 12, 14, and 16) was transformed into a simpler set of “if-then” rules in C5.0 by creating “rulesets” in the algorithm. The ruleset generated from the DT has fewer rules than the number of leaves in the decision tree, making it a more compact and simpler representation [88]. Since each conditional logic rule describes a specific context associated with a class, it is relatively easy to examine, validate, and interpret the ruleset.…”
Section: Methods (mentioning)
confidence: 99%
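The statement above describes flattening a decision tree into if-then rules. As a rough analogue (not C5.0 itself, which additionally prunes the rules so the ruleset can end up smaller than the leaf count), here is a sketch that walks a scikit-learn tree and emits one rule per root-to-leaf path; the dataset and depth limit are arbitrary choices for illustration:

```python
# Sketch: flattening a trained decision tree into if-then rules.
# This yields exactly one rule per leaf; C5.0's rulesets go further
# and prune redundant conditions and rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)
tree = clf.tree_

def extract_rules(node=0, conditions=()):
    """Walk the tree; each leaf yields one (conditions, class) rule."""
    if tree.children_left[node] == -1:  # -1 marks a leaf in sklearn's tree_
        label = iris.target_names[tree.value[node].argmax()]
        yield (" AND ".join(conditions) or "TRUE", label)
        return
    name = iris.feature_names[tree.feature[node]]
    thr = tree.threshold[node]
    yield from extract_rules(tree.children_left[node],
                             conditions + (f"{name} <= {thr:.2f}",))
    yield from extract_rules(tree.children_right[node],
                             conditions + (f"{name} > {thr:.2f}",))

for cond, label in extract_rules():
    print(f"IF {cond} THEN {label}")
```

As in the quoted statement, each extracted rule describes one specific context associated with a class, so it can be examined and validated in isolation.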
“…While the accuracy of such tools has risen for many kinds of moderation tasks, the tools often cannot explain their decisions, which makes mixed-initiative human-machine solutions challenging to design. Human-understandable ML is an active area of research [64]. Yet, as we will see, Automod does not rely on ML techniques but rather uses simple rules and regular-expression matching, which can be understood by technically savvy human moderators.…”
Section: Introduction (mentioning)
confidence: 99%
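The rule-and-regex moderation the statement attributes to Automod can be sketched as follows. This is purely illustrative: the real Automod is configured in YAML with its own condition syntax, and the patterns and actions below are invented.

```python
# Illustrative rule-and-regex moderation in the spirit of Automod.
# Patterns and actions are hypothetical, not real Automod config.
import re

RULES = [
    (re.compile(r"\bbuy (cheap|now)\b", re.IGNORECASE), "remove: spam phrase"),
    (re.compile(r"(https?://\S+\s*){3,}"), "filter: too many links"),
]

def moderate(comment: str) -> str:
    """Apply rules in order; the first matching pattern decides the action."""
    for pattern, action in RULES:
        if pattern.search(comment):
            return action  # the matching rule itself explains the decision
    return "approve"

print(moderate("BUY NOW and save!"))       # remove: spam phrase
print(moderate("Nice write-up, thanks."))  # approve
```

This transparency is the point the citation makes: unlike a black-box classifier, each action traces back to a single human-readable rule.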