2022
DOI: 10.1007/s11023-021-09583-6

Scientific Exploration and Explainable Artificial Intelligence

Abstract: Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identif…

Cited by 35 publications (20 citation statements). References 42 publications.
“…Explainability is important when using DL models, as it can not only help with finding limitations in models and increase trust in model predictions, but can also aid in scientific developments. [29][30][31] Using Miller's definition, an explanation provides additional context or insight for why a model prediction is made. [32] A local explanation can be defined as one that explains the model prediction for a specific case, whereas global explanations provide a broader description of model behavior.…”
Section: Introduction (mentioning)
confidence: 99%
“…[41] One type of explanation approach is to utilize a surrogate model that is more interpretable, such as a linear model, and fit it to the DL model. [31,34] Surrogate models are used in the Local Interpretable Model-agnostic Explanations (LIME) algorithm. LIME generates a sample space around a specific data point and uses the DL model to obtain predictions for each element in the space.…”
Section: Introduction (mentioning)
confidence: 99%
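
The surrogate-model approach described in the statement above can be made concrete with a small sketch. The following Python snippet is a minimal, illustrative LIME-style local surrogate, not the original LIME implementation: it perturbs a single data point, queries an opaque model for predictions on the perturbed samples, weights those samples by proximity to the original point, and fits a weighted linear model whose coefficients serve as the local explanation. The black-box random forest, the helper local_surrogate_explanation, and all parameter values are assumptions introduced here purely for illustration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in "opaque" model: a random forest trained on synthetic tabular data.
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def local_surrogate_explanation(instance, predict_proba, n_samples=1000, width=0.75):
    """Fit a weighted linear surrogate around one instance (LIME-style sketch)."""
    # 1. Generate a sample space around the specific data point.
    perturbations = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    # 2. Use the opaque model to obtain a prediction for each element in the space.
    targets = predict_proba(perturbations)[:, 1]
    # 3. Weight samples by proximity to the original instance (RBF kernel).
    distances = np.linalg.norm(perturbations - instance, axis=1)
    weights = np.exp(-(distances ** 2) / (width ** 2))
    # 4. Fit an interpretable (linear) surrogate to the local behaviour.
    surrogate = Ridge(alpha=1.0).fit(perturbations, targets, sample_weight=weights)
    return surrogate.coef_  # per-feature contributions near this instance

coefficients = local_surrogate_explanation(X[0], black_box.predict_proba)
print({f"feature_{i}": round(c, 3) for i, c in enumerate(coefficients)})

The original LIME additionally selects a sparse set of interpretable features and works with interpretable representations of the input, but the perturb, predict, weight, and fit loop above is the core of the surrogate idea described in the quoted statement.
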
“…Similarly, Krishnan (2020) claims that interpretability can help with justifying ML models, addressing biases and discrimination against certain groups of people (Angwin et al., 2016), and integrating human judgement and ML models. Zednik and Boelsen (2020) argue that interpretability, and more specifically XAI, can help determine what an ML model is a model of—in their example, a crucial question is whether a model only traces spurious correlations. According to them, interpretability can also render causal inference possible, and help to produce hypotheses that may facilitate our understanding of human cognition (see also Buckner, 2019 and Sullivan, 2022 forthcoming for this point; see Yoon et al., forthcoming, for the benefits of interpretability in medicine).…”
Section: Interpretability In Philosophy (mentioning)
confidence: 99%
“…It is important to point out that LYNA's predictions, even when complemented by XAI models, should not be definitive: they just point to one type of evidence among others. There are several techniques that tools like LYNA can use (see for instance Zednik & Boelsen, 2022 for an overview), but the whole point is that XAI models can potentially play the role of facilitating the realization of the purpose of AI tools (e.g., assisting diagnosis) well beyond the mere effect (e.g. the classificatory output).…”
mentioning
confidence: 99%
“…There are other works that, to me, seem to go in the direction I have briefly sketched. For instance, Zednik and Boelsen (2022) show how XAI models can contribute to scientific exploration "by facilitating the task of refining target phenomena" (p 225) and by identifying "starting points for future inquiry" (p 228). This is to say that XAI tools can go well beyond not-well-defined explanatory tasks, and that they can actually facilitate the integration of AI tools in scientific research.…”
mentioning
confidence: 99%