Kohn-Sham Equations as Regularizer: Building Prior Knowledge into Machine-Learned Physics (2021)
DOI: 10.1103/PhysRevLett.126.036401

Cited by 147 publications (152 citation statements: 0 supporting, 152 mentioning, 0 contrasting)
References 49 publications

“…We train the model inside a standard numerical method for solving the underlying PDEs as a differentiable program, with the neural networks and the numerical method written in a framework [JAX (38)] supporting reverse-mode automatic differentiation. This allows for end-to-end gradient-based optimization of the entire algorithm, similar to prior work on density functional theory (39), molecular dynamics (40), and fluids (33, 34). The methods we derive are equation-specific and require high-resolution ground-truth simulations for training data.…”
mentioning
confidence: 97%
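
As a concrete illustration of the setup this excerpt describes, here is a minimal JAX sketch of training through a numerical solver. The solver (a 1D explicit diffusion step), the tiny pointwise network, and all names are illustrative assumptions, not code from the cited works:

```python
import jax
import jax.numpy as jnp

def learned_correction(params, u):
    # Minimal pointwise one-hidden-layer network (assumed architecture).
    w1, b1, w2, b2 = params
    h = jnp.tanh(u[:, None] * w1 + b1)        # (N, H)
    return (h @ w2 + b2).squeeze(-1)          # (N,)

def step(params, u, dx=1.0, dt=0.1):
    # One explicit finite-difference diffusion step plus a learned term;
    # the whole update is differentiable with respect to `params`.
    lap = (jnp.roll(u, 1) + jnp.roll(u, -1) - 2.0 * u) / dx**2
    return u + dt * (lap + learned_correction(params, u))

def rollout(params, u0, n_steps=10):
    # Unrolled solver: reverse-mode AD propagates through every step.
    def body(u, _):
        return step(params, u), None
    u_final, _ = jax.lax.scan(body, u0, None, length=n_steps)
    return u_final

def loss(params, u0, u_target):
    # Mismatch against an (assumed) high-resolution ground-truth state.
    return jnp.mean((rollout(params, u0) - u_target) ** 2)

# End-to-end gradients through the entire numerical method:
key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
H = 8
params = (0.1 * jax.random.normal(k1, (1, H)), jnp.zeros(H),
          0.1 * jax.random.normal(k2, (H, 1)), jnp.zeros(1))
u0 = jnp.sin(jnp.linspace(0.0, 2.0 * jnp.pi, 64, endpoint=False))
u_target = 0.5 * u0                            # placeholder target
grads = jax.grad(loss)(params, u0, u_target)
```

In an actual training loop `grads` would feed a standard optimizer; the point of the sketch is only that jax.grad differentiates through the solver itself, which is what "training the model inside the numerical method" means here.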
“…It adds to existing coarse-grained and statistical potentials used for simulation [47–50] and model scoring approaches [51–53]. More broadly, it points to DMS as a useful technology falling under the banner of differentiable programming [54], an expansion of the principles of deep learning to the concept of taking gradients through arbitrary algorithms and utilising the known structure of the system under study [55, 56].…”
Section: Introduction
mentioning
confidence: 99%
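
The "gradients through arbitrary algorithms" idea this excerpt attributes to differentiable molecular simulation (DMS) can be sketched in the same JAX style. The pair potential, the plain Euler integrator, and the coordinate-matching loss below are all simplifying assumptions made for brevity:

```python
import jax
import jax.numpy as jnp

def pair_energy(params, x):
    # Lennard-Jones-style pair potential; params = (epsilon, sigma).
    eps, sigma = params
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]
    r2 = jnp.sum(diff**2, axis=-1) + jnp.eye(n)   # pad diagonal, avoid 0/0
    inv6 = (sigma**2 / r2) ** 3
    e = 4.0 * eps * (inv6**2 - inv6)
    return 0.5 * jnp.sum(e * (1.0 - jnp.eye(n)))  # mask self-interaction

def md_step(params, state, dt=1e-3):
    # Forces from autodiff of the energy; plain Euler keeps the sketch
    # short (velocity Verlet would be the usual choice in practice).
    x, v = state
    f = -jax.grad(pair_energy, argnums=1)(params, x)
    v = v + dt * f
    return (x + dt * v, v)

def simulate(params, x0, n_steps=100):
    def body(s, _):
        return md_step(params, s), None
    (x, _), _ = jax.lax.scan(body, (x0, jnp.zeros_like(x0)), None,
                             length=n_steps)
    return x

def loss(params, x0, x_ref):
    # Match the final structure to a reference (illustrative objective).
    return jnp.mean((simulate(params, x0) - x_ref) ** 2)

key = jax.random.PRNGKey(0)
x0 = 3.0 * jax.random.uniform(key, (8, 3))
x_ref = x0 + 0.1                                  # placeholder reference
grads = jax.grad(loss)((1.0, 1.0), x0, x_ref)     # d loss / d (eps, sigma)
```

Because the loss is differentiated through every integrator step, the gradient lands on the potential's parameters, which is what makes structure-based fitting of simulation potentials possible.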
“…Indeed, machine learning offers a new generation of accurate, highly non-local xc functionals [7]. While those functionals are designed to perform tasks of differing degrees of complexity, all share the aim of learning one of the maps of DFT: either the Hohenberg-Kohn map between the external potential v(r) and the density ρ(r) [8–13], or the Kohn-Sham map between the density ρ(r) and the xc functional E_xc[ρ(r)] together with its functional derivative v_xc[ρ(r)] = δE_xc[ρ(r)]/δρ(r) [14–16].…”
Section: Introduction
mentioning
confidence: 99%
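
To make the Kohn-Sham map concrete: on a grid, the functional derivative v_xc can come straight from automatic differentiation of a learned E_xc. The MLP form of E_xc (local, for brevity, though the excerpt stresses that learned functionals can be highly non-local) and the 1D grid below are assumptions for illustration:

```python
import jax
import jax.numpy as jnp

def e_xc(params, rho, dx):
    # Learned xc energy: pointwise MLP on the density, integrated over
    # the grid, i.e. E_xc ≈ sum_i f_theta(rho_i) * dx (assumed form).
    w1, b1, w2, b2 = params
    h = jnp.tanh(rho[:, None] * w1 + b1)
    f = (h @ w2 + b2).squeeze(-1)
    return jnp.sum(f) * dx

def v_xc(params, rho, dx):
    # Discrete functional derivative: v_xc(x_i) ≈ (1/dx) * dE_xc/drho_i.
    return jax.grad(e_xc, argnums=1)(params, rho, dx) / dx

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
H = 16
params = (0.1 * jax.random.normal(k1, (1, H)), jnp.zeros(H),
          0.1 * jax.random.normal(k2, (H, 1)), jnp.zeros(1))
x = jnp.linspace(0.0, 1.0, 128)
dx = x[1] - x[0]
rho = jnp.exp(-((x - 0.5) ** 2) / 0.02)           # toy density profile
potential = v_xc(params, rho, dx)                  # shape (128,)
```

Feeding such a v_xc back into the Kohn-Sham self-consistency cycle during training is, as the title of the indexed paper suggests, what lets the equations themselves act as a regularizer on the learned functional.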