The selection of coarse-grained (CG) mapping operators is a critical step for CG molecular dynamics (MD) simulation. What constitutes an optimal choice of mapping remains an open question...
In this work, we investigate the question: do code-generating large language models know chemistry? Our results indicate the answer is, mostly, yes. To evaluate this, we introduce an expandable framework for evaluating chemistry...
Chemists can be skeptical of using deep learning (DL) in decision-making, owing to the lack of interpretability of “black-box” models. Explainable artificial intelligence (XAI) is a branch of artificial intelligence (AI) that addresses this drawback by providing tools to interpret DL models and their predictions. We review the principles of XAI in the domain of chemistry and emerging methods for creating and evaluating explanations. We then focus on methods developed by our group and their applications in predicting solubility, blood–brain barrier permeability, and the scent of molecules. We show that XAI methods such as chemical counterfactuals and descriptor explanations can explain DL predictions while giving insight into structure–property relationships. Finally, we discuss how a two-step process of developing a black-box model and explaining its predictions can uncover structure–property relationships.
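To make the counterfactual idea concrete, the sketch below searches a supplied candidate set for molecules that are structurally similar to a query molecule yet receive the opposite prediction from a black-box classifier. This is a minimal, hedged illustration under stated assumptions: the helper names (`counterfactuals`, `predict_soluble`), the toy substructure-based “model”, and the hand-picked candidate list are all invented for this example and are not the authors' implementation.

```python
# Minimal sketch of counterfactual-style explanation for a molecular
# property classifier. All names (predict_soluble, the candidate list)
# are illustrative assumptions, not the method described in the review.
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs


def fingerprint(smiles: str):
    """Morgan fingerprint used to measure structural similarity."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)


def counterfactuals(base_smiles, candidate_smiles, predict, top_k=3):
    """Return candidates most similar to the base molecule whose
    predicted label differs from the base prediction."""
    base_label = predict(base_smiles)
    base_fp = fingerprint(base_smiles)
    flipped = []
    for smi in candidate_smiles:
        if predict(smi) != base_label:
            sim = DataStructs.TanimotoSimilarity(base_fp, fingerprint(smi))
            flipped.append((sim, smi))
    # The most similar molecules with a flipped prediction are the most
    # informative counterfactuals: small structural change, large effect.
    return sorted(flipped, reverse=True)[:top_k]


if __name__ == "__main__":
    # Toy black-box stand-in: "soluble" if the molecule has a hydroxyl group.
    def predict_soluble(smiles):
        mol = Chem.MolFromSmiles(smiles)
        return mol.HasSubstructMatch(Chem.MolFromSmarts("[OX2H]"))

    base = "c1ccccc1O"  # phenol
    candidates = ["c1ccccc1", "c1ccccc1C", "c1ccccc1N", "CCO"]
    for sim, smi in counterfactuals(base, candidates, predict_soluble):
        print(f"{smi}: similarity {sim:.2f}, prediction flipped")
```

Ranking the flipped candidates by Tanimoto similarity reflects the intent of a counterfactual explanation: the smallest structural change that alters the prediction is the most informative about the structure–property relationship the model has learned.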