2019
DOI: 10.1016/j.eswa.2019.07.001

Fostering interpretability of data mining models through data perturbation

Abstract: With the widespread adoption of data mining models to solve real-world problems, the scientific community is facing the need of increasing their interpretability and comprehensibility. This is especially relevant in the case of black box models, in which inputs and outputs are usually connected by highly complex and nonlinear functions; in applications requiring an interaction between the user and the model; and when the machine's solution disagrees with the human experience. In this contribution we present a …

Cited by 12 publications (2 citation statements) | References 52 publications
“… 130 However, interpretability or explainability of the results of such approaches hinder their use in practice. 131 It should be noted that CDSSs still remain not as highly adopted by users, perhaps partially due to general lack of engagement from clinicians, physicians, or health specialists. 132 …”
Section: Concepts From Systems Medicine Modeling and Data Science
confidence: 99%
“…One of the main criticisms against DL is a general lack of interpretability due to its black-box nature [21,160]. Nevertheless, progress has been made in improving the interpretability of DL in healthcare [115,161,162]. For example, by highlighting patient trajectories that maximally activate CNN predictions, Suresh et al [126] improved the interpretability when applying the CNN to predict clinical intervention…”
Section: Challenges and Future Trends
confidence: 99%