2021
DOI: 10.1021/acs.jcim.0c01344

Coloring Molecules with Explainable Artificial Intelligence for Preclinical Relevance Assessment

Abstract: Graph neural networks can solve certain drug discovery tasks such as molecular property prediction and de novo molecule generation. However, these models are considered 'black-box' and 'hard-to-debug'. This study aimed to improve modeling transparency for rational molecular design by applying the integrated gradients explainable artificial intelligence (XAI) approach to graph neural network models. Models were trained for predicting plasma protein binding, cardiac potassium channel inhibition…
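
As context for the approach the abstract describes, the sketch below shows the core of integrated gradients: gradients of a differentiable model are averaged along a straight-line path from a baseline input to the actual input, then scaled by the input-baseline difference. The toy model, feature sizes, and function names are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of integrated gradients (IG) for a differentiable
# molecular property model. The model and shapes below are hypothetical
# stand-ins, not the paper's implementation.
import torch

def integrated_gradients(model, x, baseline=None, steps=50):
    """Approximate IG attributions for input x via a Riemann sum."""
    if baseline is None:
        baseline = torch.zeros_like(x)  # a common (assumed) baseline choice
    # Interpolate between baseline and input along a straight-line path.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = (baseline + alphas * (x - baseline)).requires_grad_(True)
    output = model(path).sum()                     # sum over all path points
    grads = torch.autograd.grad(output, path)[0]   # dF/dx at each path point
    avg_grad = grads.mean(dim=0)                   # average gradient on path
    return (x - baseline) * avg_grad               # per-feature attributions

# Toy usage: a linear "property predictor" over per-atom feature vectors.
model = torch.nn.Sequential(torch.nn.Linear(8, 1))
atom_features = torch.randn(12, 8)   # 12 atoms, 8 features each (made up)
attributions = integrated_gradients(model, atom_features)
print(attributions.shape)            # torch.Size([12, 8])
```

Summing each atom's feature attributions gives a single per-atom score, which is what gets "colored" onto the 2D structure in this kind of analysis.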

Cited by 68 publications (67 citation statements). References 92 publications.

“…Such models are often considered black boxes, and understanding the mechanism in the hidden layers is a challenge. However, research undertaken in this direction aims at deciphering what the algorithm has learned [101, 174]. This may also be important in detecting bias in the data [121].…”
Section: Conclusion and Discussion
confidence: 99%
“…This allows one to visualize the atoms that are considered "mostly responsible" for modulating the activity. Some approaches exploit integrated gradients (IG) [Sundararajan et al., 2017] to generate input attributions on the atoms [Jiménez-Luna et al., 2021]. Although XAI methods do a good job of attributing specific atoms, it is difficult for them to refer to functional groups.…”
Section: Related Work
confidence: 99%
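
For reference, the integrated gradients attribution for feature i of input x relative to a baseline x', as defined in Sundararajan et al. (2017), is

IG_i(x) = (x_i - x'_i) \int_0^1 \frac{\partial F(x' + \alpha(x - x'))}{\partial x_i} \, d\alpha

In the molecular setting the excerpt describes, these per-feature attributions are summed over each atom's features to produce a per-atom score that can be drawn onto the structure; aggregating atoms into functional groups, as the excerpt notes, is the harder step.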
“…There are four major approaches for explaining a prediction from a black-box model [17]: identifying which features contribute the most [18-22], identifying which training data contributes the most [23], fitting a locally interpretable model around the prediction [24], and providing contrastive or counterfactual points [25]. Feature importance analysis provides per-feature weights that identify how each feature contributed to the final prediction.…”
Section: Introduction
confidence: 99%
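
For concreteness, here is a minimal sketch of the third family the excerpt lists, fitting a locally interpretable model around a single prediction. It probes a generic black-box function with Gaussian perturbations and fits a proximity-weighted linear model; the function names, kernel, and scales are illustrative assumptions, not the cited method's implementation.

```python
# Minimal local-surrogate sketch: explain one prediction of a black-box
# model by fitting a weighted linear model in its neighborhood.
import numpy as np

def local_linear_explanation(predict_fn, x, n_samples=500, scale=0.1):
    """Return per-feature local weights for predict_fn around point x."""
    rng = np.random.default_rng(0)
    # Perturb the instance with Gaussian noise to probe the local surface.
    perturbed = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = predict_fn(perturbed)                    # black-box predictions
    # Weight samples by proximity to x (closer samples matter more).
    weights = np.exp(-np.sum((perturbed - x) ** 2, axis=1) / (2 * scale**2))
    # Weighted least squares: scale rows by sqrt(weight), then solve.
    X = np.hstack([perturbed, np.ones((n_samples, 1))])  # add intercept
    w = np.sqrt(weights)
    coef, *_ = np.linalg.lstsq(w[:, None] * X, w * y, rcond=None)
    return coef[:-1]   # per-feature local weights (intercept dropped)

# Toy black box: a smooth nonlinear function of four features.
black_box = lambda X: np.sin(X[:, 0]) + X[:, 1] ** 2 - 0.5 * X[:, 2]
point = np.array([0.3, -0.2, 1.0, 0.0])
print(local_linear_explanation(black_box, point))
```

The resulting coefficients play the same role as the per-feature weights the excerpt describes: each one indicates how strongly that feature moves the prediction in the neighborhood of the explained point.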