2021
DOI: 10.1016/j.physleta.2021.127621
Highly accurate machine learning model for kinetic energy density functional

Cited by 11 publications (7 citation statements); References 39 publications.
“…Because their explicit form typically is not known, they do not fit easily into the one-point, two-point categorization, though some are built specifically to have a certain maximum order of density gradients. Just as machine learning itself is relatively recent, so also the literature of ML KEDFs is comparatively recent and small.…”
Section: Introduction; citation type: mentioning; confidence: 99%
“…A radically different use of ML consists of improving the theory at its core instead of targeting the DFT outputs. For instance, ML can be used to numerically design new energy density functionals, effectively producing full exchange and correlation energies [46][47][48][49][50][51] and kinetic energy densities [52][53][54][55]. These strategies, in general, seek to find more accurate approximations to the DFT energy, going beyond the current approximations [56], or to eliminate the need to introduce the KS construct by replacing the self-consistent KS equations with a direct minimization of the functional [6].…”
Section: Introduction; citation type: mentioning; confidence: 99%
“…In physics-guided machine learning implementations, it is essential to set, in advance, a vector space (in terms of features) onto which the targeted quantities are projected [24]. The commonly used spaces are based on GGA and meta-GGA descriptors [25,26], kernels [27][28][29][30], and special expansions [31]. It is also important to mention that there are other implementations that are not principally physics-guided and are mostly based on neural networks (NN) [32,33].…”
Section: Introduction; citation type: mentioning; confidence: 99%
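The GGA-style feature spaces mentioned in the statement above can be illustrated with a minimal sketch: compute the reduced density gradient s for a model density and fit the kinetic-enhancement factor τ/τ_TF by least squares in the basis [1, s²]. With the von Weizsäcker kinetic energy density as the training target, the exact relation τ_vW = (5/3) s² τ_TF means the fit recovers the coefficient 5/3. The 1D Gaussian density and all variable names are illustrative assumptions, not the setups of the cited papers.

```python
import numpy as np

# Illustrative sketch only: a 1D Gaussian model density and a least-squares
# fit of the enhancement factor F = tau / tau_TF against the GGA-style
# reduced gradient s. For the von Weizsacker target, F(s) = (5/3) s^2
# holds exactly, so the fit should recover the coefficient 5/3.

x = np.linspace(-4.0, 4.0, 401)
dx = x[1] - x[0]
n = np.exp(-x**2) + 1e-12                  # strictly positive model density
grad_n = np.gradient(n, dx)                # central-difference gradient

k_f = (3.0 * np.pi**2 * n) ** (1.0 / 3.0)  # local Fermi wavevector
s = np.abs(grad_n) / (2.0 * k_f * n)       # reduced density gradient
tau_tf = 0.3 * (3.0 * np.pi**2) ** (2.0 / 3.0) * n ** (5.0 / 3.0)
tau_vw = grad_n**2 / (8.0 * n)             # von Weizsacker KED (target)

# Physics-guided regression: expand F(s) in the feature basis [1, s^2].
A = np.column_stack([np.ones_like(s), s**2])
coef, *_ = np.linalg.lstsq(A, tau_vw / tau_tf, rcond=None)
print(coef[1])   # close to 5/3
```

Because both τ_vW and s are built from the same numerical gradient, the linear system is exactly consistent and the recovered slope matches 5/3 to near machine precision; real ML KEDFs replace this two-term basis with the kernels or neural networks cited above.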