2014
DOI: 10.1002/qua.24836

Adaptive machine learning framework to accelerate ab initio molecular dynamics

Abstract: Quantum mechanics-based ab initio molecular dynamics (MD) simulation schemes offer an accurate and direct means to monitor the time evolution of materials. Nevertheless, the expensive and repetitive energy and force computations required in such simulations lead to significant bottlenecks. Here, we lay the foundations for an accelerated ab initio MD approach integrated with a machine learning framework. The proposed algorithm learns from previously visited configurations in a continuous and adaptive manner on-…
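The on-the-fly learning idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' algorithm: the descriptor function `fingerprint`, the reference engine `ab_initio_forces`, the kernel-ridge surrogate, and the distance-based confidence test are all placeholder assumptions standing in for whatever the paper actually uses.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Hypothetical adaptive MD loop: the surrogate model answers cheap force
# queries when it is confident, and falls back to the expensive ab initio
# calculation (adding the result to the training set) when it is not.

def adaptive_md(initial_config, n_steps, dt, dist_tol, fingerprint, ab_initio_forces):
    configs = [initial_config.copy()]
    forces = [ab_initio_forces(initial_config)]
    model = KernelRidge(kernel="rbf", alpha=1e-8)
    model.fit([fingerprint(initial_config)], [forces[0].ravel()])

    config = initial_config.copy()
    velocity = np.zeros_like(config)
    for _ in range(n_steps):
        x = fingerprint(config)
        # Confidence proxy: distance to the nearest training configuration.
        nearest = min(np.linalg.norm(x - fingerprint(c)) for c in configs)
        if nearest < dist_tol:
            f = model.predict([x])[0].reshape(config.shape)  # cheap ML forces
        else:
            f = ab_initio_forces(config)        # expensive reference call
            configs.append(config.copy())       # grow the training set ...
            forces.append(f)
            model.fit([fingerprint(c) for c in configs],
                      [fo.ravel() for fo in forces])  # ... and re-learn
        # Symplectic-Euler update (unit masses, for brevity).
        velocity = velocity + f * dt
        config = config + velocity * dt
    return config
```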

Cited by 395 publications (376 citation statements)
References 54 publications
“…In practice, addressing the former drawback is less important, since one typically knows beforehand which ranges of interatomic distances and chemical compositions are relevant to the chemical problem at hand: it is straightforward to define the appropriate domain of applicability for supervised ML models in chemistry. In recent years, much work has been devoted to tackling the latter drawback through the discovery and development of improved representations M [9–16]. For large N, errors of ML models have been found to decay as inverse powers of N [2], implying a linear relationship, log(Error) = a − b log(N). Therefore, the best representation must (i) minimize the offset a and (ii) preserve the linearity of the second term while maximizing its prefactor b.…”
mentioning · confidence: 99%
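As a quick illustration of the quoted learning-curve relationship, one can fit log(Error) = a − b log(N) by ordinary least squares in log-log coordinates. The error values below are synthetic numbers chosen for the example, not results from any of the cited papers.

```python
import numpy as np

# Illustrative only: synthetic (N, error) pairs following Error ≈ c * N^(-b).
N = np.array([100, 200, 400, 800, 1600, 3200])
error = np.array([1.00, 0.62, 0.40, 0.25, 0.155, 0.097])

# Fit log(Error) = a - b*log(N): a linear model in log-log coordinates.
slope, a = np.polyfit(np.log(N), np.log(error), deg=1)
b = -slope
print(f"offset a = {a:.2f}, slope b = {b:.2f}")
# A better representation lowers the offset a and/or raises b, i.e. makes
# the error decay faster as more training data N is added.
```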
“…A new scheme is presented that systematically learns, in an interpolative manner, to predict atomic forces in environments encountered during the dynamical evolution of materials from a set of high-level calculations performed on reference atomic configurations with modest system sizes. This concept is resonant with emerging data-driven (or "big data" [2–6]) approaches aimed at materials discovery in general [7,8], as well as at accelerating materials simulations [9–13]. Machine learning (ML) methods using neural networks [9,10] and Gaussian processes [11,12] have been successful in the development of interatomic potentials, wherein the potential energy surface is learned from a set of higher-level (quantum mechanics-based) reference calculations.…”
mentioning · confidence: 99%
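Gaussian-process regression, mentioned in the excerpt as one successful route to interatomic potentials, reduces to a few linear-algebra steps. The sketch below is a generic textbook GP with an RBF kernel on placeholder descriptors, not the specific method of the cited references; the training data and descriptor dimensions are made up.

```python
import numpy as np

# Minimal Gaussian-process regression in the spirit of GP-based interatomic
# potentials: predict a force component from an atomic-environment descriptor.

def rbf_kernel(A, B, length_scale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-6):
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_star = rbf_kernel(X_test, X_train)
    alpha = np.linalg.solve(K, y_train)     # K^{-1} y
    mean = K_star @ alpha                   # posterior mean force
    # The posterior variance doubles as an uncertainty estimate, useful for
    # deciding when a new reference calculation is needed.
    v = np.linalg.solve(K, K_star.T)
    var = rbf_kernel(X_test, X_test).diagonal() - np.einsum("ij,ji->i", K_star, v)
    return mean, var

# Toy usage with random descriptors (illustrative numbers only):
rng = np.random.default_rng(0)
X_train = rng.normal(size=(20, 5))          # 20 reference environments
y_train = np.sin(X_train).sum(axis=1)       # stand-in "force" values
X_test = rng.normal(size=(3, 5))
mean, var = gp_predict(X_train, y_train, X_test)
print(mean, var)
```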
“…The distinctive aspect of the present contribution, namely, learning to predict atomic forces directly from past data, has been suggested only recently [12,13] (to accelerate ab initio MD simulations on-the-fly). Here, we propose the creation of a stand-alone, purely data-driven force-prediction recipe (devoid of any explicit functional form) that can also provide the underlying potential energy surface (through integration).…”
mentioning · confidence: 99%
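The parenthetical remark about recovering the potential energy surface "through integration" follows from F = −dE/dx: integrating a predicted force along a path of configurations yields energy differences up to a constant. Below is a one-dimensional sketch in which a made-up harmonic force stands in for the ML force prediction.

```python
import numpy as np

# Recovering an energy profile from forces alone: since F = -dE/dx,
# E(x) - E(x0) = -∫ F dx, accumulated here with the trapezoidal rule.

def predicted_force(x, k=2.0):
    return -k * x          # placeholder for an ML force model

xs = np.linspace(0.0, 1.5, 200)             # path of configurations
fs = predicted_force(xs)

segments = 0.5 * (fs[1:] + fs[:-1]) * np.diff(xs)
energy = -np.concatenate(([0.0], np.cumsum(segments)))

# Numeric result vs the analytic harmonic energy 1/2 k x^2 at the endpoint:
print(energy[-1], 0.5 * 2.0 * 1.5**2)
```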