2020
DOI: 10.3389/fpsyt.2020.551299

Illuminating the Black Box: Interpreting Deep Neural Network Models for Psychiatric Research

Abstract: Psychiatric research is often confronted with complex abstractions and dynamics that are not readily accessible or well-defined to our perception and measurements, making data-driven methods an appealing approach. Deep neural networks (DNNs) are capable of automatically learning abstractions in the data that can be entirely novel and have demonstrated superior performance over classical machine learning models across a range of tasks and, therefore, serve as a promising tool for making new discoveries in psych…

Cited by 64 publications (41 citation statements)
References 52 publications
“…XAI has found emerging applications in medicine, finance, economics, security, and defense [328], [329]. In psychiatry, the mission of XAI is to help clarify the link between neural circuits and behavior, and to improve our understanding of therapeutic strategies to enhance cognitive, affective, and social functions [330], [331]. Notably, XAI is distinguished from standard AI in two important ways: (i) it promotes transparency, interpretability, and generalizability; (ii) it transforms classical "black box" ML models into "glass box" models, while achieving similar or improved performance.…”
Section: Explainable AI and Causality Testing in Psychiatry
confidence: 99%
“…This is also an important dimension for improving "precision" in mental health. The choice of interpretation method [331], whether model-specific (such as analyzing the attention weights of a transformer) or model-agnostic (such as local interpretable model-agnostic explanations, LIME), is highly specific to the nature of the problem. While various interpretation methods can be used to learn about model functioning, it is important to note that the interpretation results can only be trusted as long as the challenges of generalizability and data quality are addressed.…”
Section: Challenge and Opportunities
confidence: 99%
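The excerpt above contrasts model-specific interpretation (e.g., inspecting a transformer's attention weights) with model-agnostic methods such as LIME. As a rough illustration only, the following sketch applies LIME's tabular explainer to a hypothetical classifier trained on synthetic data; the features, labels, and model are placeholders and are not taken from the cited works.

```python
# Hypothetical sketch: model-agnostic explanation of one prediction with LIME.
# The classifier, feature names, and data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))                          # synthetic feature matrix
y_train = (X_train[:, 0] + X_train[:, 2] > 0).astype(int)    # synthetic binary label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=[f"feature_{i}" for i in range(5)],
    class_names=["control", "case"],
    mode="classification",
)

# LIME fits a local linear surrogate around this single instance and reports
# which features push the prediction toward each class.
explanation = explainer.explain_instance(X_train[0], model.predict_proba, num_features=3)
print(explanation.as_list())   # (feature condition, weight) pairs of the local model
```

Because LIME only queries the model's predict function, the same pattern applies regardless of whether the underlying model is a random forest, a DNN, or any other classifier, which is what makes the method model-agnostic.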
“…Furthermore, network architecture designs are largely selected on the basis of experimental results alone and are not intuitively explainable to medical practitioners. For example, deep neural networks (DNNs), which are widely used in biomedical machine learning applications, can become a "black box" that is difficult for practitioners to understand intuitively as layers are added [26]. The potential for real-world adoption suffers as a result, as a lack of understandability erodes trust in, and the likelihood of adoption of, machine learning tools [27].…”
Section: Interpretability of Deep Learning for Cancer Prognosis Prediction
confidence: 99%
“…The DL model achieved ROC AUC figures of 0.978, 0.956, and 0.943, respectively. Topics extracted from health notes identified possible predictive topics, such as cognitive function and laboratory testing, in the absence of interpretability of the DL models, which are considered "black box" because of a lack of transparency and interpretability concerning how input data are transformed into output (15). More interpretable ML decision trees (gradient-boosted trees) used electronic health records of chemotherapy patients, including palliative care patients, to predict 30-day mortality after the start of a new regimen.…”
Section: Introduction
confidence: 99%