2020
DOI: 10.1063/5.0021955

OrbNet: Deep learning for quantum chemistry using symmetry-adapted atomic-orbital features

Abstract: We introduce a machine learning method in which energy solutions from the Schrödinger equation are predicted using symmetry-adapted atomic-orbital features and a graph neural network architecture. OrbNet is shown to outperform existing methods in terms of learning efficiency and transferability for the prediction of density functional theory results while employing low-cost features obtained from semi-empirical electronic structure calculations. For applications to datasets of drug-like molecules, in…
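As a rough illustration of the architecture the abstract describes (a graph neural network reading an energy out of per-atom features), here is a minimal sketch; the feature shapes, weight matrices, and function names are hypothetical placeholders, not OrbNet's actual implementation:

```python
import numpy as np

def message_passing_energy(node_feats, edge_weights, w_msg, w_out):
    """One message-passing round plus a sum readout over atoms.

    node_feats:   (n_atoms, d) per-atom feature vectors (stand-ins for
                  symmetry-adapted atomic-orbital features)
    edge_weights: (n_atoms, n_atoms) symmetric weights, e.g. built from
                  overlap-matrix magnitudes in a semi-empirical calculation
    w_msg, w_out: toy learned weight matrices (hypothetical)
    """
    messages = edge_weights @ node_feats                 # aggregate neighbor features
    updated = np.tanh((node_feats + messages) @ w_msg)   # update node states
    return float((updated @ w_out).sum())                # permutation-invariant readout

rng = np.random.default_rng(0)
n_atoms, d = 4, 8
feats = rng.normal(size=(n_atoms, d))
edges = np.abs(rng.normal(size=(n_atoms, n_atoms)))
edges = (edges + edges.T) / 2                            # symmetric "overlap" weights
w_msg = rng.normal(size=(d, d))
w_out = rng.normal(size=(d, 1))
energy = message_passing_energy(feats, edges, w_msg, w_out)
```

Because the readout sums over atoms, relabeling the atoms (and permuting the edge matrix consistently) leaves the predicted energy unchanged.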

Cited by 248 publications (252 citation statements)
References 65 publications
“…Given a difference in energy between these states from an inexpensive reference method, ∆E_ref, we train functionals to minimize the mean squared deviation between the corrected energy difference, ∆E_ref + ∆E_corr, and a target energy difference, ∆E_target (in this work, singlet-triplet energy splittings from MRCISD-F12+Q); this training scheme is outlined in Figure 1. Although this centering of the loss function solely on relative energies stands in contrast to previous work in NeuralXC [21], DeepKS [22], OrbNet [23], and KDFA [29], it has three advantages: (i) it allows benchmark results to be obtained from a variety of different sources (including experiment, which almost always yields relative energies); (ii) relative energies are the quantities of most interest to chemists, since bond energies, energies of reaction, and barrier heights are all relative energies; and (iii) theoretical data used for training is almost always more accurate for relative energies than for absolute energies. For optimization of parameters and hyperparameters, the 360 carbenes were split into a training set of 289 carbenes, a validation set of 37 carbenes, and a test set of 36 carbenes.…”
mentioning
confidence: 57%
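The training scheme quoted above — minimizing the mean squared deviation between reference-plus-correction energy gaps and target gaps — can be sketched as follows; the array names and toy data are illustrative assumptions, not the cited work's functional form:

```python
import numpy as np

def relative_energy_mse(delta_e_ref, corrections, delta_e_target):
    """Mean squared deviation between corrected energy differences
    (reference gap + model correction) and target energy differences."""
    return float(np.mean((delta_e_ref + corrections - delta_e_target) ** 2))

rng = np.random.default_rng(1)
ref_gaps = rng.normal(size=10)     # inexpensive-method gaps (e.g. singlet-triplet)
target_gaps = ref_gaps + 0.3       # benchmark gaps with a systematic 0.3 offset

loss_uncorrected = relative_energy_mse(ref_gaps, np.zeros(10), target_gaps)
loss_corrected = relative_energy_mse(ref_gaps, np.full(10, 0.3), target_gaps)
```

A model that learns the systematic 0.3 offset drives the loss to zero; and because any constant shift applied to the absolute energies cancels in the gaps, the loss depends only on relative energies, which is the property the quoted passage emphasizes.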
“…Recently, neural networks have been used to develop density functionals for KS-DFT with no reference to the standard analytical forms generally utilized in functional development. [21][22][23] Inspired by this work, we extend this idea to nonclassical functionals in order to develop functionals specifically for use with MC-NCFT without reference to any functionals in KS-DFT.…”
Section: Introduction
mentioning
confidence: 99%
“…rotations and translations) directly embedded into their design. These architectures have already been applied to several molecular tasks, such as the prediction of electronic properties of molecules [88]. Research in this direction is expected to intensify in the near future, opening up fresh modeling opportunities.…”
Section: Article Highlights
mentioning
confidence: 99%
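The symmetry-embedding idea in the quote above can be illustrated with a minimal invariance check: features built from sorted pairwise distances are unchanged under rotations and translations of the molecule. This is a generic sketch of the principle, not the architecture of reference [88]:

```python
import numpy as np

def pairwise_distance_features(coords):
    """Sorted pairwise distances: invariant to rotation, translation,
    reflection, and atom relabeling of the input coordinates."""
    diffs = coords[:, None, :] - coords[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(coords), k=1)   # unique atom pairs only
    return np.sort(dists[iu])

def random_orthogonal(rng):
    """A random 3x3 orthogonal matrix (rotation or reflection) via QR."""
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return q

rng = np.random.default_rng(2)
coords = rng.normal(size=(5, 3))             # toy 5-atom geometry
rot = random_orthogonal(rng)
feats = pairwise_distance_features(coords)
feats_moved = pairwise_distance_features(coords @ rot.T + 1.0)  # rotate + translate
```

Equivariant architectures build such symmetries into every layer rather than only into the input features, but the invariance test is the same: transform the geometry and check that the output is unchanged.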
“…However, the steep computational requirements of highly accurate methods such as CCSD(T) [19] and Gaussian-4 [20] preclude their use on a routine basis. A variety of noteworthy graph-based architectures (viz., SchNet [21], PhysNet [22], DimeNet [23], DeepMoleNet [24], OrbNet [25]) have been proposed for the prediction of DFT level (B3LYP/6-31G(2df,p)) energies on the QM9 dataset [26,27]. In this work, however, we aim to predict G4(MP2) level energies, a relatively cheaper alternative to the G4 method, which is typically accurate within 1.0 kcal mol−1 of the experimental value, and hence is a more valuable quantity to reproduce.…”
Section: Introduction
mentioning
confidence: 99%