2022
DOI: 10.1007/s10462-022-10377-0

Interpretable decision trees through MaxSAT

Abstract: We present an approach to improve the accuracy-interpretability trade-off of Machine Learning (ML) Decision Trees (DTs). In particular, we apply Maximum Satisfiability (MaxSAT) technology to compute Minimum Pure DTs (MPDTs). We improve the runtime of previous approaches and show that these MPDTs can achieve higher accuracy than DTs generated with the ML framework sklearn.
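The abstract's central object, a Minimum Pure DT, is a smallest decision tree whose leaves each contain samples of a single class. The paper computes these with a MaxSAT encoding, which is not reproduced here; the sketch below instead illustrates the underlying optimization target by exhaustive search on binary features. The dataset, function names, and depth bound are illustrative assumptions, not the authors' method.

```python
def is_pure(samples):
    """A set of samples is pure if all carry the same class label."""
    return len({label for _, label in samples}) <= 1

def min_nodes(samples, depth_limit=4):
    """Smallest number of internal (split) nodes in any decision tree
    that classifies `samples` perfectly, i.e. with only pure leaves.
    Brute-force over binary feature splits, bounded by depth_limit."""
    if is_pure(samples):
        return 0          # a single pure leaf, no split needed
    if depth_limit == 0:
        return float("inf")  # no pure tree within the bound
    n_features = len(samples[0][0])
    best = float("inf")
    for f in range(n_features):
        left = [s for s in samples if s[0][f] == 0]
        right = [s for s in samples if s[0][f] == 1]
        if not left or not right:
            continue      # split does not separate anything
        cost = (1
                + min_nodes(left, depth_limit - 1)
                + min_nodes(right, depth_limit - 1))
        best = min(best, cost)
    return best

# XOR over two binary features: no single split is pure,
# so the minimum pure tree needs 3 internal nodes.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(min_nodes(data))  # 3
```

A MaxSAT formulation replaces this exponential search with hard clauses enforcing that the tree is well-formed and classifies every training sample correctly, plus soft clauses penalizing tree size, so an off-the-shelf MaxSAT solver returns a provably minimum pure tree.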



Cited by 4 publications (2 citation statements)
References 20 publications
“…The former refers to the design of AI models with their interpretability in mind. For example, transparent models such as decision trees (Alòs et al, 2023; Florez‐Lopez & Ramon‐Jeronimo, 2015; Niu, Zhang, et al, 2020), generalized additive models (Chen et al, 2018), rule‐based models (Yang, Lim, et al, 2023), and causality (Shin, 2021), among others. Yang, Lim, et al (2023) proposed an integrated learning approach based on evidential inference rules that provide the contribution of attributes and activation rules for a good understanding of model predictions.…”
Section: Review of the Literature
confidence: 99%
“…In [30], the authors presented a hybrid classifier system that used clustering to improve the accuracy of the decision tree. In [31], the authors provided a method for enhancing machine learning decision trees' accuracy-interpretability trade-off. In [32], the authors used confusion matrix to optimize decision tree.…”
Section: Data Classification
confidence: 99%