2021
DOI: 10.1088/2632-2153/abe294

Exploration of transferable and uniformly accurate neural network interatomic potentials using optimal experimental design

Abstract: Machine learning has been proven to have the potential to bridge the gap between the accuracy of ab initio methods and the efficiency of empirical force fields. Neural networks are one of the most frequently used approaches to construct high-dimensional potential energy surfaces. Unfortunately, they lack an inherent uncertainty estimation which is necessary for efficient and automated sampling through the chemical and conformational space to find extrapolative configurations. The identification of the latter i…

Cited by 14 publications (32 citation statements). References 31 publications.
“…Applying the GP posterior transformation to the last-layer kernel yields an equivalent AL method to the one based on OED. 19 However, in our experimental results, the application of random projections to the full gradient kernel leads to better uncertainty estimates than OED-uncertainties. Finally, various selection algorithms are introduced, ranging from the naive selection of multiple samples with the highest informativeness to more elaborated algorithms, which satisfy (1)-(3).…”
Section: Introduction (mentioning)
confidence: 61%
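The contrast drawn in this statement, last-layer gradient features (equivalent to OED) versus random projections of the full gradient kernel, can be made concrete with a small sketch. This is a minimal illustration in plain NumPy, not code from either paper: the feature matrices, the projection width n_proj, and the helper posterior_variance are assumed names for this example.

```python
import numpy as np

def posterior_variance(phi_train, phi_pool, noise=1e-2):
    """GP posterior variance at pool points for the linear kernel
    k(x, x') = phi(x) . phi(x'), computed in weight space."""
    d = phi_train.shape[1]
    # Posterior precision in weight space: Phi^T Phi / s^2 + I
    A = phi_train.T @ phi_train / noise**2 + np.eye(d)
    cov_w = np.linalg.inv(A)
    # Predictive variance sigma^2(x) = phi(x)^T cov_w phi(x)
    return np.einsum("nd,de,ne->n", phi_pool, cov_w, phi_pool)

rng = np.random.default_rng(0)
n_params, n_proj = 10_000, 128                  # full gradient dim vs. sketch dim
grads_train = rng.normal(size=(50, n_params))   # per-sample gradient features
grads_pool = rng.normal(size=(200, n_params))

# GP(RP): a random projection sketches the full gradient kernel; GP(LL)
# would instead pass the (much smaller) last-layer gradient features
# through the same posterior_variance function.
P = rng.normal(size=(n_params, n_proj)) / np.sqrt(n_proj)
var_rp = posterior_variance(grads_train @ P, grads_pool @ P)
```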
“…(21), i.e. σ²(E|D) = k(S, S|D), which is equivalent to the results obtained in the OED framework, 19 if last-layer gradient features are used. A naive BMDAL approach would select N_batch samples with the highest uncertainty.…”
Section: Gaussian Process Posterior Transformation - GP(LL) and GP(RP) (mentioning)
confidence: 84%
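The naive BMDAL selection mentioned above, ranking the pool by the posterior variance σ²(E|D) = k(S, S|D) and taking the top N_batch, fits in a few lines. A minimal sketch with assumed names (pool_variance, n_batch); it deliberately ignores redundancy between the selected samples, which is what the more elaborate selection algorithms address.

```python
import numpy as np

def naive_batch_selection(pool_variance, n_batch):
    """Return indices of the n_batch candidates with the highest
    predictive variance; correlations between picks are ignored."""
    return np.argsort(pool_variance)[::-1][:n_batch]

rng = np.random.default_rng(1)
pool_variance = rng.random(200)   # stand-in for sigma^2(E|D) per structure
batch = naive_batch_selection(pool_variance, n_batch=10)
```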
“…While our improved NN allows for convenient training on a given data set, it is especially useful when building the training data set on-the-fly, e.g., during a molecular dynamics (MD) simulation, or offline, drawing new training structures from an unlabeled data set. These workflows are referred to as learning-on-the-fly or active learning, respectively. They reduce the number of required ab initio calculations but require frequent retraining of the model from updated training sets.…”
Section: Introduction (mentioning)
confidence: 99%
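As a rough picture of the offline active-learning workflow this statement describes, the following schematic loop retrains a surrogate, scores an unlabeled pool, and labels only the most uncertain structures ab initio. Everything here (ToyModel, label_ab_initio, the scoring rule) is a hypothetical stand-in, not the method of the cited or citing paper.

```python
import random

def label_ab_initio(structure):
    # Hypothetical stand-in for an expensive ab initio calculation.
    return sum(structure)

class ToyModel:
    """Hypothetical surrogate exposing a scalar uncertainty per sample."""
    def fit(self, X, y):
        self.mean = sum(y) / len(y)
    def uncertainty(self, pool):
        # Placeholder score; a real model would use e.g. GP variance.
        return [abs(sum(s) - getattr(self, "mean", 0.0)) for s in pool]

def active_learning(model, labeled, pool, n_rounds=3, n_batch=2):
    """Schematic offline active-learning loop: retrain, score the
    unlabeled pool, label the most informative structures."""
    for _ in range(n_rounds):
        X, y = zip(*labeled)
        model.fit(X, y)                          # frequent retraining
        scores = model.uncertainty(pool)
        picks = sorted(range(len(pool)), key=lambda i: -scores[i])[:n_batch]
        for i in sorted(picks, reverse=True):    # pop from the back first
            s = pool.pop(i)
            labeled.append((s, label_ab_initio(s)))
    return model

random.seed(0)
pool = [[random.random() for _ in range(3)] for _ in range(20)]
labeled = [(s, label_ab_initio(s)) for s in (pool.pop(0), pool.pop(0))]
active_learning(ToyModel(), labeled, pool)
```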