2020
DOI: 10.1371/journal.pcbi.1007792
eXplainable Artificial Intelligence (XAI) for the identification of biologically relevant gene expression patterns in longitudinal human studies, insights from obesity research

Abstract: To date, several machine learning approaches have been proposed for the dynamic modeling of temporal omics data. Although they have yielded impressive results in terms of model accuracy and predictive ability, most of these applications are based on "black-box" algorithms, and the research community has called for more interpretable models. The recent eXplainable Artificial Intelligence (XAI) revolution offers a solution to this issue, where rule-based approaches are highly suitable for explanatory pur…

Cited by 55 publications (26 citation statements)
References 65 publications
“…In the work by Augusto et al, sequential rule mining was proposed as one of the solutions to understand the 'black-box' machine learning model. Their pipeline was able to find biologically relevant genes in six different datasets [157]. Other approaches include the calibration of machine learning models by making the predictions probabilistically interpretable [158].…”
Section: Discussion
confidence: 99%
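The cited pipeline's actual sequential-rule miner is not reproduced here. As a toy illustration of the idea it describes, the sketch below counts ordered event pairs (a → b) across discretized per-gene expression trajectories and keeps those above a support threshold; the gene names, discretization labels, and threshold are all invented for the example.

```python
from collections import Counter
from itertools import combinations

def mine_sequential_rules(sequences, min_support=2):
    """Count ordered event pairs (a -> b) across discretized
    expression trajectories and keep the frequent ones.
    A toy stand-in for a full sequential-rule miner."""
    pair_counts = Counter()
    for seq in sequences:
        seen = set()
        for i, j in combinations(range(len(seq)), 2):  # i < j preserves temporal order
            pair = (seq[i], seq[j])
            if pair not in seen:  # count each rule at most once per trajectory
                seen.add(pair)
                pair_counts[pair] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= min_support}

# Hypothetical "up"/"down" calls per timepoint for three subjects
trajectories = [
    ["GENE_A_up", "GENE_B_down", "GENE_C_up"],
    ["GENE_A_up", "GENE_C_up"],
    ["GENE_B_down", "GENE_C_up"],
]
rules = mine_sequential_rules(trajectories, min_support=2)
print(rules)
```

Rules surviving the support filter (here, GENE_A_up → GENE_C_up and GENE_B_down → GENE_C_up) are human-readable, which is precisely the interpretability advantage the citing authors highlight.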
“…In the work by Augusto et al, sequential rule mining was proposed as one of the solutions to understand the 'black-box' machine learning model. Their pipeline was able to find biologically relevant genes in six different datasets [157]. Other approaches include the calibration of machine learning models by making the predictions probabilistically interpretable [158].…”
Section: Discussionmentioning
confidence: 99%
“…GAP used in our study can enforce the feature maps to preserve spatial information relevant to the classes, so that they can be used to interpret the decision of the CNN models [8,28]. This method for identifying areas that are attributed to differential diagnosis using GAP with CAM leads toward the concept of eXplainable AI (XAI) [29,30]. XAI or responsible AI is an emerging paradigm to overcome the inherent "black box problem" brought by deep frameworks, wherein it is impossible for us to understand how decisions are furnished.…”
Section: Discussion
confidence: 99%
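The GAP-plus-CAM construction the citing authors describe can be sketched in a few lines: each final-conv feature map is weighted by the classifier weight its globally pooled value feeds for the target class, and the weighted maps are summed into a spatial relevance map. The array shapes and values below are hypothetical, not taken from the cited study.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Compute a CAM from the last conv layer of a GAP->linear classifier.
    feature_maps: (C, H, W) activations; class_weights: (C,) weights of
    the target class in the linear head fed by global average pooling."""
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0.0)  # keep positively contributing regions
    if cam.max() > 0:
        cam = cam / cam.max()   # normalize to [0, 1] for heatmap overlay
    return cam

# Toy example: C=3 channels on a 4x4 spatial grid
rng = np.random.default_rng(0)
fmaps = rng.random((3, 4, 4))
w = np.array([0.5, -0.2, 0.8])
cam = class_activation_map(fmaps, w)
print(cam.shape)  # (4, 4)
```

Because GAP commutes with the spatial sum, the resulting map shows which image regions pushed the class score up, which is what makes it usable as an explanation of the CNN's decision.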
“…Despite the clear trade-off between accuracy and explainability, explainable models are needed to ensure safety of the patients and establish trust in the AI models. Recent developments in AI include recurrent neural network (RNN) variant models that can control data inputs at various stages and self-evaluate which timepoints and data inputs are most predictive of the outcome, visualizing techniques, association rule mining (using biologically-based relationships between data elements), and functional validation of the results (117,119).…”
Section: Journal Pre-proof
confidence: 99%
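The RNN variants mentioned above, which self-evaluate which timepoints drive the prediction, typically do so with an attention mechanism over the sequence of hidden states. A minimal NumPy sketch of that weighting step follows; the hidden states, query vector, and dimensions are invented for illustration and no particular published architecture is implied.

```python
import numpy as np

def timepoint_attention(hidden_states, query):
    """Score each timepoint's hidden state against a query vector and
    softmax the scores -- the weights indicate which timepoints the
    model attends to when forming its prediction.
    hidden_states: (T, D); query: (D,)."""
    scores = hidden_states @ query            # (T,) one score per timepoint
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    context = weights @ hidden_states         # (D,) attended summary vector
    return weights, context

T, D = 5, 8
rng = np.random.default_rng(1)
h = rng.standard_normal((T, D))  # hypothetical RNN hidden states
q = rng.standard_normal(D)       # hypothetical learned query
w, ctx = timepoint_attention(h, q)
print(w.shape, ctx.shape)  # (5,) (8,)
```

Inspecting `w` after training is what lets such a model report which clinical timepoints and data inputs were most predictive of the outcome.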