2020
DOI: 10.1063/5.0013059
An entropy-maximization approach to automated training set generation for interatomic potentials

Abstract: Machine learning-based interatomic potentials are currently garnering a lot of attention as they strive to achieve the accuracy of electronic structure methods at the computational cost of empirical potentials. Given their generic functional forms, the transferability of these potentials is highly dependent on the quality of the training set, the generation of which can be highly labor-intensive. Good training sets should at once contain a very diverse set of configurations while avoiding redundancies that inc…

Cited by 26 publications (21 citation statements)
References 33 publications
“…Entropy-maximization techniques 30,31 help to partially overcome these problems by maximizing the structural diversity of a data set. When acquiring new data, these methods are focused on the structural dissimilarity compared to the existing data.…”
Section: Main
confidence: 99%
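The entropy-maximization idea this statement describes — acquiring new configurations by their dissimilarity to the existing data — can be sketched as a greedy max-min (farthest-point) selection in descriptor space. This is an illustrative sketch, not the cited papers' actual algorithm; the function name and toy descriptors are invented for the example.

```python
import numpy as np

def greedy_diverse_select(descriptors, n_select, seed_index=0):
    """Greedily pick configurations that are maximally dissimilar
    (max-min distance) to those already selected -- a simple proxy
    for entropy-maximizing training-set construction."""
    descriptors = np.asarray(descriptors, dtype=float)
    selected = [seed_index]
    # Distance from every candidate to its nearest already-selected point.
    min_dist = np.linalg.norm(descriptors - descriptors[seed_index], axis=1)
    while len(selected) < n_select:
        candidate = int(np.argmax(min_dist))  # most dissimilar configuration
        selected.append(candidate)
        new_dist = np.linalg.norm(descriptors - descriptors[candidate], axis=1)
        min_dist = np.minimum(min_dist, new_dist)
    return selected

# Three near-duplicate configurations plus one outlier: the outlier
# is chosen second because it maximizes the minimum distance.
picks = greedy_diverse_select([[0.0, 0.0], [0.1, 0.0],
                               [5.0, 5.0], [0.05, 0.05]], 2)
# picks == [0, 2]
```

In practice the descriptors would be the same atomic-environment features the potential itself uses, so diversity is measured in the space the model actually sees.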
“…Generation of reference structures suitable for tuning machine learning models has been explored in numerous studies [26,27,37,43,124,137,138,139,140,141,142,123,125,126,143,127,144]. Ab initio MD has been a particularly popular approach to sample physically meaningful configurations [26,27].…”
Section: Maise-net: Automated Generator Of Neural Network
confidence: 99%
“…A similar approach was developed by Dolgirev et al [145]. Several strategies to improve the mapping of configuration spaces have been developed in recent years, e.g., normal mode sampling [142], active learning-based models [125,126], enhanced sampling [127], ab initio random structure searching [123,124], and entropy-maximization approach [143]. A number of studies have shown the benefit of iterating the generation of data and the parameterization of models [26,34,41,123,124,125,126,144].…”
Section: Maise-net: Automated Generator Of Neural Network
confidence: 99%
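The iterate-and-refit workflow these studies describe — alternating between generating candidate structures, selecting informative ones, labeling them, and refitting the model — can be outlined as a hypothetical skeleton. Every callable here is a placeholder (e.g. `label` standing in for a DFT calculation), not an API from the cited works.

```python
def iterative_training_loop(generate_candidates, label, train, select, n_iters=3):
    """Skeleton of an iterative data-generation / model-parameterization
    cycle: each round proposes candidates, keeps the informative ones,
    labels them, and refits the model on the grown dataset."""
    dataset = []
    model = None
    for _ in range(n_iters):
        candidates = generate_candidates(model)      # e.g. MD or random search
        chosen = select(candidates, dataset)         # e.g. dissimilarity filter
        dataset.extend((c, label(c)) for c in chosen)  # e.g. DFT energies/forces
        model = train(dataset)                       # refit the potential
    return model, dataset
```

With toy callables, one can check that the second round adds nothing once the candidates are already in the dataset, which is the behavior a dissimilarity-based `select` is meant to enforce.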
“…We will discuss in this paper how to construct and utilize structures that are not energetically near the equilibrium structure. An approach in that direction is the recent work from M. Karabin and D. Perez [16]. Their work aims to sample the descriptor space of the targeted potential model as widely and unbiased as possible.…”
Section: Introduction
confidence: 99%