2021
DOI: 10.1038/s41467-021-25342-8
Differentiable sampling of molecular geometries with uncertainty-based adversarial attacks

Abstract: Neural network (NN) interatomic potentials provide fast prediction of potential energy surfaces, closely matching the accuracy of the electronic structure methods used to produce the training data. However, NN predictions are only reliable within well-learned training domains, and show volatile behavior when extrapolating. Uncertainty quantification methods can flag atomic configurations for which prediction confidence is low, but arriving at such uncertain regions requires expensive sampling of the NN phase s…

Cited by 60 publications
(61 citation statements)
References 72 publications
“…We use the SchNet [59], PaiNN [36], Allegro [10], and SpookyNet [37] models. Model implementations are from the NeuralForceField repository [34,60,61] and the Allegro repository [10]. Model sizes (w in Equation 6) were varied between 16, 64, and 256, while the number of layers/convolutions (d in Equation 6) was chosen to be 2, 3, or 4.…”
Section: Methods
confidence: 99%
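The hyperparameter grid this statement describes (widths w in {16, 64, 256}, depths d in {2, 3, 4}) can be enumerated with a short sketch; the variable names here are illustrative, not taken from the cited code.

```python
from itertools import product

# Hypothetical sketch of the sweep described above: model width w
# (16, 64, or 256) crossed with depth d (2, 3, or 4 layers/convolutions).
widths = [16, 64, 256]
depths = [2, 3, 4]

grid = list(product(widths, depths))
print(len(grid))  # 9 (w, d) combinations
```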
“…where α_E and α_F are coefficients that determine the relative weighting of energy and force predictions during training [34]. For scaling experiments we use the L1 loss or mean absolute error,…”
Section: Main
confidence: 99%
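A minimal sketch of the weighted loss this statement describes, assuming a squared-error form for each term (the quote fixes only the relative α_E/α_F weighting, not the error metric) and the stated L1 loss for the scaling experiments:

```python
import numpy as np

def combined_loss(E_pred, E_true, F_pred, F_true, alpha_E=0.1, alpha_F=1.0):
    """Weighted sum of energy and force errors; the squared-error form
    and the default coefficients here are illustrative assumptions."""
    loss_E = np.mean((E_pred - E_true) ** 2)  # energy term
    loss_F = np.mean((F_pred - F_true) ** 2)  # force term
    return alpha_E * loss_E + alpha_F * loss_F

def mae(pred, true):
    """L1 loss / mean absolute error, as quoted for scaling experiments."""
    return np.mean(np.abs(pred - true))
```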
“…To bypass these forward simulations, we developed an inverse sampling strategy that chooses the most informative geometries to annotate with ground-truth calculations. 40 The approach is based on adversarial attacks, a concept developed in ML for image classification. 41 By computing the gradient of the error with respect to the input and performing gradient ascent to modify the input, one generates a new image with maximal model error.…”
Section: Differentiable Uncertainty For Active Learning
confidence: 99%
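The gradient-ascent idea in this statement can be illustrated with a toy one-dimensional sketch: treat the variance of a small model ensemble as the uncertainty signal and ascend its gradient with respect to the input. The ensemble, step size, and iteration count are illustrative assumptions, not the paper's NN potentials or attack schedule.

```python
import numpy as np

# Three toy ensemble "models" f_i(x) = c_i * x**2; their disagreement
# (variance) stands in for the uncertainty being maximized.
coeffs = np.array([0.9, 1.0, 1.1])

def uncertainty(x):
    return (coeffs * x**2).var()

def grad_uncertainty(x):
    # Var(c_i * x^2) = Var(c_i) * x^4, so d/dx = Var(c_i) * 4 * x^3.
    return coeffs.var() * 4.0 * x**3

# Gradient *ascent* on the input: move x toward higher model uncertainty,
# mirroring the adversarial-attack construction described above.
x = 1.0
for _ in range(20):
    x += 0.5 * grad_uncertainty(x)
# The attacked input x now sits in a higher-uncertainty region than the start.
```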
“…Particularly for high-dimensional systems, this computational overhead might negate some of the benefits provided by the NNPs. To bypass these forward simulations, we developed an inverse sampling strategy that chooses the most informative geometries to annotate with ground-truth calculations. The approach is based on adversarial attacks, a concept developed in ML for image classification.…”
Section: Enhanced Atomistic Simulation
confidence: 99%
“…Assessing and benchmarking the robustness of ML or DL approaches through a series of adversarial attacks is popular in the image classification domain [20], but there are other benchmarks closer to the domain of molecular data. In [21], the authors provide a series of realistic adversarial attacks to benchmark methods that predict chemical properties from atomistic simulations, e.g., molecular conformations, reactions, and phase transitions. Even closer to the subject of our paper, protein sequences, the authors of [22] show that methods such as AlphaFold [23] and RoseTTAFold [24], which employ deep neural networks to predict protein conformation, are not robust: they produce drastically different protein structures as a result of very small, biologically meaningful perturbations in the protein sequence.…”
Section: Related Work
confidence: 99%