2023
DOI: 10.1093/bib/bbad236

Explainable AI for Bioinformatics: Methods, Tools and Applications

Abstract: Artificial intelligence (AI) systems utilizing deep neural networks and machine learning (ML) algorithms are widely used for solving critical problems in bioinformatics, biomedical informatics and precision medicine. However, complex ML models are often perceived as opaque, black-box methods, making it difficult to understand the reasoning behind their decisions. This lack of transparency can be a challenge for end-users and decision-makers as well as AI developers. In sensitive areas such as health…

Cited by 48 publications (13 citation statements)
References 85 publications
“…Dimensionality reduction methods are valuable for distilling this data into a more manageable form (Tapinos et al. 2019; Paradis 2022). However, interpreting these methods' two-dimensional representations can be challenging due to unclear biological significance (Karim et al. 2022). In our workflow, we have incorporated PHATE and t-SNE alongside a metric that computes the percentage of nearest neighbours sharing the same annotation (e.g.…”
Section: Discussion (mentioning)
confidence: 99%
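The neighbour-agreement metric described in this excerpt can be made concrete. Below is a minimal sketch, assuming scikit-learn; the function name, the choice of k and the synthetic data are illustrative assumptions, not the cited workflow's actual implementation.

import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

def neighbour_annotation_agreement(X, labels, k=10, random_state=0):
    # Hypothetical helper: embed X in 2-D with t-SNE, then report, for each
    # point, the fraction of its k nearest embedded neighbours that share
    # the same annotation. The cited workflow's implementation may differ.
    emb = TSNE(n_components=2, random_state=random_state).fit_transform(X)
    # Request k+1 neighbours because each point is its own nearest neighbour.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(emb)
    _, idx = nn.kneighbors(emb)
    labels = np.asarray(labels)
    # Drop the self-neighbour (column 0) and compare annotations.
    same = labels[idx[:, 1:]] == labels[:, None]
    return same.mean(axis=1)  # per-point agreement in [0, 1]

# Toy example: two well-separated annotation groups.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 20)), rng.normal(5, 1, (50, 20))])
labels = np.array([0] * 50 + [1] * 50)
print(f"mean agreement: {neighbour_annotation_agreement(X, labels).mean():.2f}")

A high mean agreement suggests the low-dimensional embedding keeps annotated groups together, which is the property the cited workflow uses to sanity-check PHATE and t-SNE plots.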
“…However, such spatial imaging data faces challenges of missing values and noise, which can negatively affect downstream analyses such as spatial domain detection 28,29 . Several deep learning models have been proposed to improve noisy transcriptomics data and perform data analysis 7 , but most of them are black-box approaches that lack transparency and interpretability 18,30 . To address this challenge, we have proposed xSiGra, which not only accurately identifies spatial cell types and enhances gene expression profiles, but also offers quantitative insights into which cells and genes are important for identifying spatial cell types, thus making it an interpretable model.…”
Section: Discussion (mentioning)
confidence: 99%
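The excerpt notes that xSiGra reports which cells and genes drive a predicted spatial cell type. One generic way to obtain such per-gene scores from a trained classifier is input-gradient saliency; the sketch below uses PyTorch on a hypothetical dense model and is not xSiGra's actual graph-based attribution method.

import torch
import torch.nn as nn

# Hypothetical classifier mapping one cell's gene-expression vector
# (2000 genes) to logits over 8 cell types.
model = nn.Sequential(nn.Linear(2000, 64), nn.ReLU(), nn.Linear(64, 8))
model.eval()

def gene_saliency(model, expression, target_class):
    # Gradient of the target-class logit with respect to each input gene:
    # a generic saliency score, not xSiGra's attribution method.
    x = expression.clone().requires_grad_(True)
    model(x)[0, target_class].backward()
    return x.grad.abs().squeeze(0)  # one importance score per gene

expr = torch.rand(1, 2000)  # one cell
scores = gene_saliency(model, expr, target_class=3)
print("top contributing gene indices:", torch.topk(scores, 10).indices.tolist())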
“…Although the above DL-based methods identify spatial cells or domains with high accuracy, their intrinsic black-box nature inhibits explainability regarding which genes and cells these methods use to achieve accurate spatial identities 17 . Such explainability issues are common when applying advanced deep learning approaches 18 . Explaining model decisions can help uncover limitations and validate model behaviour against known knowledge 19 .…”
Section: Introduction (mentioning)
confidence: 99%
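As a concrete instance of "explaining model decisions", a model-agnostic probe such as permutation importance shuffles one feature at a time and measures the drop in held-out accuracy; features whose permutation hurts most are the ones the model relies on, and these can then be checked against known biology. The snippet below is a minimal sketch on synthetic data using scikit-learn; the dataset and model are illustrative, not taken from the cited studies.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy stand-in for an expression matrix: 500 samples x 30 genes,
# only 5 of which are informative.
X, y = make_classification(n_samples=500, n_features=30, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature in turn and record the mean accuracy drop.
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"gene {i}: importance {result.importances_mean[i]:.3f}")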
“…Furthermore, gene expression signatures often face a “black box” problem: they frequently do not offer insight into what contributes to a positive or negative response and often lack a biological or mechanistic link (i.e., a lack of explainability). 29 This lack of transparency results in a lack of trust in the models, especially in preclinical safety organizations where explainability is often required. Finally, toxicogenomics, like most new technologies, faced a hype problem, with numerous extravagant and ridiculous claims that negatively affected its ongoing adoption.…”
Section: Transcriptomic Approaches To Predict Hepatotoxicity and Carc... (mentioning)
confidence: 99%