2020
DOI: 10.1063/5.0015344
A Gaussian process regression adaptive density guided approach for potential energy surface construction

Abstract: Note: This paper is part of the JCP Special Topic on Machine Learning Meets Chemical Physics.

Cited by 23 publications (29 citation statements)
References 95 publications
“…One is to solve the overdetermined system of linear equations including derivatives of the MLP, as in so-called "operator quantum machine learning" with FCHL [44,95] (analogous to including derivatives in the loss function of NNs). Another is to explicitly include either or both covariances between functions and function derivatives in the system of linear equations [91], as was done in many potentials, such as GDML [58], sGDML [20,96], RKHS+F [67], GPR with either XYZ [12,54] or internal coordinates [15,17,57], FCHL [44], and GAP (Gaussian approximation potential) [42]. Explicit inclusion of covariances between function derivatives greatly increases the accuracy of MLPs, but the computational cost rapidly grows as the number of linear equations to solve increases with the number of force components in a molecule.…”
Section: Kernel Methods Potentials
confidence: 99%
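The quoted statement describes training on covariances between function values and function derivatives. A minimal one-dimensional sketch of that idea is below, assuming a plain RBF kernel and toy sin(x) data; the function names and data are illustrative only, not the implementation used by any of the cited potentials.

```python
# Sketch: GPR trained jointly on function values and derivatives (1-D toy),
# illustrating the "covariances between functions and function derivatives"
# idea from the quote above. All names and the sin(x) data are illustrative.
import numpy as np

def rbf(a, b, l=1.0):
    """RBF kernel matrix and pairwise differences a_i - b_j."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * d**2 / l**2), d

def joint_kernel(x, l=1.0):
    """Joint covariance over [values, derivatives] of the training points."""
    k, d = rbf(x, x, l)
    k_fg = (d / l**2) * k                    # cov(f(x_i), f'(x_j))
    k_gg = (1.0 / l**2 - d**2 / l**4) * k    # cov(f'(x_i), f'(x_j))
    return np.vstack([np.hstack([k, k_fg]),
                      np.hstack([k_fg.T, k_gg])])

x = np.linspace(0.0, 2.0, 6)
y, dy = np.sin(x), np.cos(x)                 # "energies" and "forces"
K = joint_kernel(x) + 1e-10 * np.eye(2 * len(x))  # jitter for stability
alpha = np.linalg.solve(K, np.concatenate([y, dy]))

def predict(xs, l=1.0):
    """Posterior mean for f at new points xs."""
    k, d = rbf(xs, x, l)
    k_star = np.hstack([k, (d / l**2) * k])  # [cov(f*, f), cov(f*, f')]
    return k_star @ alpha
```

Note how the linear system is twice as large as a value-only fit: this is the cost growth the quote refers to, which in 3N dimensions scales with the number of force components per molecule.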
“…Other global descriptors used in MLPs [15,17,40,55-57] are internal coordinates, which are also traditional molecular representations in computational chemistry. They satisfy rotational and translational invariance, but their construction is not unique.…”
Section: Global Descriptors
confidence: 99%
“…MLatom uses the very computationally efficient Cholesky decomposition by default. Bunch-Kaufman and LU decomposition approaches are also available, which are sometimes necessary when Cholesky decomposition fails [47]. The solution of the above system of equations also requires calculation of the kernel matrix of size N_tr^2, which can become very large and no longer fit in the available computer memory.…”
Section: Kernel Ridge Regression
confidence: 99%
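The solve path in the quote (Cholesky by default, another factorization when it fails) can be sketched as follows, assuming a plain RBF kernel as a stand-in for the real descriptors; the function names here are illustrative and are not MLatom's API.

```python
# Sketch of the regularized kernel solve described above: Cholesky first,
# falling back to LU when the matrix is not numerically positive definite.
# The RBF kernel and all names are illustrative, not MLatom's implementation.
import numpy as np
from scipy.linalg import cho_factor, cho_solve, lu_factor, lu_solve

def rbf_kernel(a, b, sigma=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * d**2 / sigma**2)

def krr_fit(x, y, lam=1e-8, sigma=1.0):
    """Solve (K + lam*I) alpha = y, preferring the cheaper Cholesky path."""
    A = rbf_kernel(x, x, sigma) + lam * np.eye(len(x))
    try:
        alpha = cho_solve(cho_factor(A), y)   # default: Cholesky
    except np.linalg.LinAlgError:
        alpha = lu_solve(lu_factor(A), y)     # fallback: LU decomposition
    return alpha

x_train = np.linspace(-1.0, 1.0, 9)
alpha = krr_fit(x_train, x_train**2)
pred = rbf_kernel(np.array([0.3]), x_train) @ alpha
```

The N_tr x N_tr kernel matrix `A` is what dominates memory here, which is the scaling bottleneck the quote points out for large training sets.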
“…Note that the derivative information can also be included in the training data for KRR, which usually requires implementing higher-order derivatives of the kernel functions [26,37,39,47-50]. Such implementations in MLatom are currently underway and will be released in the near future.…”
Section: Kernel Ridge Regression
confidence: 99%
“…This approach proceeds by training a statistical technique on a relatively small set of data from ab initio calculations on the PES of interest, known as the training set. Many such techniques have been employed to predict the energy in these algorithms, including neural networks [9-13], moment tensors [14-16], and Gaussian processes (GPs) [17-34].…”
Section: Introduction
confidence: 99%