2022
DOI: 10.1038/s41524-022-00863-y
Equivariant graph neural networks for fast electron density estimation of molecules, liquids, and solids

Abstract: Electron density $\rho(\vec{r})$ is the fundamental variable in the calculation of ground state energy with density…

Cited by 27 publications (26 citation statements)
References 66 publications
“…In fact, as shown in the lower panel of the Figure, absolute density errors integrated on a real-space grid are distributed around a mean absolute error of 0.45%, which is about 25% larger than those observed for liquid water. When compared with the results of ref for the very same QM9 test molecules, we find that our errors are about 1.5 times larger; this is still remarkable when considering that we used only 6% of the training set and that we predict the all-electron density, whereas ref uses almost the entire QM9 data set and uses the pseudovalence density as the target. In Figure we also report learning curves and error histogram for the derived total energy predictions, showing a mean absolute error that is brought down to 1.57 kcal/mol, with 65% of our predictions falling within chemical accuracy.…”
Section: Results
confidence: 74%
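The statement above reports a derived-energy mean absolute error of 1.57 kcal/mol with 65% of predictions within chemical accuracy. As an illustrative sketch only (the helper name and the 1 kcal/mol threshold are assumptions, not taken from the cited paper's code), both summary statistics can be computed from per-molecule energy errors like this:

```python
import numpy as np

# Chemical accuracy is commonly taken as 1 kcal/mol; this is an assumed
# convention for the sketch, not a value quoted from the cited work.
CHEMICAL_ACCURACY = 1.0  # kcal/mol

def energy_error_summary(e_pred, e_ref):
    """Hypothetical helper: MAE and fraction of predictions within
    chemical accuracy, given predicted/reference energies in kcal/mol."""
    abs_err = np.abs(np.asarray(e_pred, dtype=float)
                     - np.asarray(e_ref, dtype=float))
    mae = abs_err.mean()
    frac_within = (abs_err <= CHEMICAL_ACCURACY).mean()
    return mae, frac_within
```

A prediction set with errors of 0.5, 0.2, and 3.0 kcal/mol, for example, would yield an MAE of about 1.23 kcal/mol with two thirds of the molecules inside the 1 kcal/mol window.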
“…In testing the accuracy of our predictions we rely on the same partitioning of the data set reported in ref ; i.e., using the same random selection of 10 000 structures as a test set, while using the remaining configurations for training the model. In order to directly compare our results with what is reported in ref , we also make use of the same local environment definition by selecting a radial cutoff around the atoms of r cut = 4 Å. For all learning exercises, we consider active set sizes that span M = {2500, 5000, 10000}.…”
Section: Results
confidence: 99%
“…ε_ρ for each molecule in the test sets is computed using a 3D cubic grid with voxel spacing (0.2, 0.2, 0.2) Bohr for the BFDb-SSI test set and (1.0, 1.0, 1.0) Bohr for the QM9 test set, both with a cutoff at ρ(r) = 10⁻⁵ a₀⁻³. We note that the two baseline methods used slightly different normalization conventions when computing the dataset-averaged L1 density error ε_ρ: 1) computing ε_ρ for each molecule and normalizing over the number of molecules in the test set (62), or 2) normalizing over the total number of electrons in the test set (61). We found that the average ε_ρ computed using normalization 2 is around 5% higher than that from normalization 1 for our results.…”
Section: Each Neuron In H_t
confidence: 74%
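The two normalization conventions described in the quote above differ in whether the L1 density error is averaged per molecule or weighted by electron count. A minimal sketch of both, assuming densities sampled on a uniform grid (so the voxel volume cancels in the ratio; all array and function names here are illustrative, not from the cited code):

```python
import numpy as np

def l1_density_error_pct(rho_pred, rho_ref):
    """Per-molecule L1 density error in percent on a shared uniform grid.
    On a uniform grid the voxel volume cancels between numerator and
    denominator, so plain sums suffice."""
    return 100.0 * np.sum(np.abs(rho_pred - rho_ref)) / np.sum(rho_ref)

def avg_error_per_molecule(pairs):
    """Normalization 1: average the per-molecule errors over molecules."""
    return float(np.mean([l1_density_error_pct(p, r) for p, r in pairs]))

def avg_error_per_electron(pairs):
    """Normalization 2: pool absolute errors and normalize by the total
    integrated reference density (i.e., total electron count)."""
    num = sum(np.sum(np.abs(p - r)) for p, r in pairs)
    den = sum(np.sum(r) for p, r in pairs)
    return 100.0 * num / den
```

The two averages disagree whenever molecules of different electron counts have different relative errors, which is consistent with the roughly 5% discrepancy the authors observe between the conventions.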
“…(Materials and Methods); specifically, OrbNet-Equi achieves an average ε_ρ of 0.191 ± 0.003% on BFDb-SSI using 2,000 training samples, compared with 0.29% for Symmetry-Adapted Gaussian Process Regression (61), and an average ε_ρ of 0.206 ± 0.001% on QM9 using 123,835 training samples, compared with 0.28 to 0.36% for DeepDFT (62). Fig.…”
Section: Results
confidence: 95%