2022
DOI: 10.1063/5.0072784

Equivariant representations for molecular Hamiltonians and N-center atomic-scale properties

Abstract: Symmetry considerations are at the core of the major frameworks used to provide an effective mathematical representation of atomic configurations that is then used in machine-learning models to predict the properties associated with each structure. In most cases, the models rely on a description of atom-centered environments and are suitable to learn atomic properties or global observables that can be decomposed into atomic contributions. Many quantities that are relevant for quantum mechanical calculations, h…
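The key requirement discussed here can be made concrete: rotating a structure by R must transform each block of the Hamiltonian by the matching Wigner D-matrices, H(Rx) = D(R) H(x) D(R)^T. The sketch below illustrates only this condition with a hand-built toy model (two atoms, p orbitals only, arbitrary radial functions); it is not the representation introduced in the paper.

```python
# A minimal sketch of the equivariance condition, NOT the paper's model:
# a toy two-atom system with one real p shell per atom, and a Hamiltonian
# built covariantly from the bond geometry. Real p orbitals (px, py, pz)
# transform like Cartesian vectors, so each l = 1 Wigner D-matrix is just
# the 3x3 rotation matrix R itself.
import numpy as np

def toy_hamiltonian(positions):
    """Toy 6x6 p-block Hamiltonian for two atoms (illustrative radial forms)."""
    r = positions[1] - positions[0]
    d = np.linalg.norm(r)
    u = r / d
    pp_sigma, pp_pi = -1.0 / d, -0.3 / d        # Slater-Koster-like split
    P = np.outer(u, u)                          # projector along the bond
    hop = pp_sigma * P + pp_pi * (np.eye(3) - P)
    H = np.zeros((6, 6))
    H[:3, :3] = H[3:, 3:] = -0.5 * np.eye(3)    # invariant on-site blocks
    H[:3, 3:] = H[3:, :3] = hop                 # covariant off-site block
    return H

def rotation_matrix(axis, angle):
    """Rodrigues' formula for a rotation about a (normalized) axis."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.cross(np.eye(3), axis)               # skew cross-product matrix
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

pos = np.array([[0.0, 0.0, 0.0], [1.1, 0.2, -0.3]])
R = rotation_matrix([1.0, 2.0, 0.5], 0.7)
D = np.kron(np.eye(2), R)           # block-diagonal D(R) for two p shells

H_rot = toy_hamiltonian(pos @ R.T)              # Hamiltonian of rotated structure
H_cov = D @ toy_hamiltonian(pos) @ D.T          # equivariant transform of original
print(np.allclose(H_rot, H_cov))                # True: H(Rx) = D(R) H(x) D(R)^T
```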

Cited by 40 publications (43 citation statements: 0 supporting, 43 mentioning, 0 contrasting) · References: 54 publications

“…Hegde and Bowen [32] employed kernel ridge regression with a bispectrum representation [33] for an analytical representation of a minimal-basis DFT Hamiltonian for bulk copper and diamond. Equivariant parameterisations for molecular systems along similar lines to what we describe here have been reported, learning either from the Hamiltonian [34] or from wavefunctions and electronic densities [35]. These works apply linear and nonlinear equivariant models, respectively, to the MD17 molecular dataset, both of which improve on the non-equivariant SchNOrb approach of ref.…”
Section: Introduction (citation type: mentioning)
Confidence: 68%
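For orientation, a kernel ridge regression model of the kind cited above predicts each matrix element as a kernel expansion over training environments. The following sketch is generic: a Gaussian kernel and random placeholder descriptors stand in for the bispectrum-based model of Hegde and Bowen.

```python
# A minimal kernel-ridge-regression sketch for Hamiltonian matrix elements.
# Assumptions: X_train holds precomputed invariant descriptors and y_train the
# target matrix elements; a Gaussian kernel and random placeholder data stand
# in for the bispectrum-based kernel and DFT data of the cited work.
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Pairwise Gaussian (RBF) kernel between descriptor sets A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * sigma**2))

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))     # placeholder environment descriptors
y_train = np.sin(X_train[:, 0])          # placeholder target matrix elements

lam = 1e-6                               # ridge regularisation strength
K = gaussian_kernel(X_train, X_train)
alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)  # fit weights

X_test = rng.normal(size=(5, 16))
y_pred = gaussian_kernel(X_test, X_train) @ alpha   # predicted elements
```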
“…While this is auspicious for future works, it is not guaranteed that this will be the case for different types of molecules in different solvation environments. To extend the transferability of this approach to other classes of molecules, given the nonlocal nature of electronic excitations in large molecules, it may turn out to be necessary to supplement our approach by including richer physically based descriptors, e.g., electronic orbitals, and to compute excitation energies as the eigenvalues of ML effective Hamiltonians. …”
Section: Results (citation type: mentioning)
Confidence: 99%
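That proposal amounts to diagonalizing a predicted effective Hamiltonian and reading excitations off its eigenvalue spectrum. A minimal sketch, with a random symmetric placeholder for H_eff and an assumed occupation count:

```python
# A minimal sketch: excitation energies from the spectrum of an ML effective
# Hamiltonian. H_eff below is a random symmetric placeholder, and the
# occupation count is assumed; a HOMO-LUMO-style gap stands in for the
# lowest excitation energy.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8))
H_eff = 0.5 * (A + A.T)                  # symmetrise the placeholder matrix

eps = np.linalg.eigvalsh(H_eff)          # energy levels, ascending order
n_occ = 4                                # assumed number of occupied levels
gap = eps[n_occ] - eps[n_occ - 1]        # crude excitation-energy estimate
print(gap)
```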
“…Our approach currently requires training with data augmentation to implicitly learn sensitivity to rigid rotations of the system, which can become computationally costly for large systems and increasingly large atomic-orbital basis sets. These data augmentations can be avoided by incorporating equivariance into the model design, which has previously been achieved for orbital prediction with the SE(3)-equivariant neural networks of PhiSNet [42] and the symmetry-adapted representations employed by Ceriotti and co-workers [55]. Operating on optimized minimal basis sets also provides benefits in scaling to larger systems, as these representations reduce the effective dimensionality of the learning target while also achieving speedups when solving the eigenvalue problem for the predicted Hamiltonian matrix [54]. While incorporating more inductive biases into the Orbital Mixer model design could improve data efficiency and performance, the simple MLP-based construction of our architecture already positions Orbital Mixer as an effective rapid-inference model with competitive prediction accuracy.…”
Section: Discussion (citation type: mentioning)
Confidence: 99%
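The eigenvalue problem referred to here is the generalized one, H C = S C diag(eps), since atomic-orbital bases are non-orthogonal; working in a minimal basis shrinks the matrix dimension n and hence the cost of this O(n^3) solve. A minimal sketch with placeholder matrices:

```python
# A minimal sketch of the post-prediction eigenvalue step: solve the
# generalized problem H C = S C diag(eps) in a non-orthogonal atomic-orbital
# basis. H and S below are random placeholders; in practice H would come from
# the ML model and S from the (minimal) basis set.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n = 6                                    # minimal-basis dimension
A = rng.normal(size=(n, n))
H = 0.5 * (A + A.T)                      # symmetric predicted Hamiltonian
B = rng.normal(size=(n, n))
S = B @ B.T + n * np.eye(n)              # symmetric positive-definite overlap

eps, C = eigh(H, S)                      # orbital energies and MO coefficients
print(eps)
# A smaller (minimal) basis shrinks n, making this O(n^3) solve cheaper.
```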