Chemists can be skeptical about using deep learning (DL) in decision
making because of the lack of interpretability of “black-box”
models. Explainable artificial intelligence (XAI) is a branch of artificial
intelligence (AI) that addresses this drawback by providing tools
to interpret DL models and their predictions. We review the principles
of XAI in the domain of chemistry and emerging methods for creating
and evaluating explanations. Then, we focus on methods developed by
our group and their applications in predicting solubility, blood–brain
barrier permeability, and the scent of molecules. We show that XAI
methods such as chemical counterfactuals and descriptor explanations
can explain DL predictions while giving insight into structure–property
relationships. Finally, we discuss how a two-step process of first developing
a black-box model and then explaining its predictions can uncover structure–property
relationships.