Evolutional deep neural network (2021)
DOI: 10.1103/physreve.104.045303
Cited by 43 publications (29 citation statements)
References 26 publications
“…Furthermore, it should also be emphasized that we can introduce a loss function associated with a priori knowledge from physics, since both NNs and LSE are based on minimization with respect to the weights. Inserting a physical loss function may be one of the promising paths towards practical applications of both methods 34–36 .…”
Section: Discussion
confidence: 99%
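Such a physics-informed loss can be sketched as below. This is a minimal illustration, not code from the cited papers: the tiny network, the example constraint (a 1D Poisson equation u_xx = f), and the weight lam are all assumptions made for the sketch.

import jax
import jax.numpy as jnp

def u(theta, x):
    # tiny scalar network; theta = (W1, b1, w2) is purely illustrative
    W1, b1, w2 = theta
    return jnp.tanh(x * W1 + b1) @ w2

# second derivative of the network output with respect to x
u_xx = jax.grad(jax.grad(u, argnums=1), argnums=1)

def loss(theta, x_data, y_data, x_col, f, lam=1.0):
    # ordinary data misfit on labelled samples
    pred = jax.vmap(u, in_axes=(None, 0))(theta, x_data)
    data_term = jnp.mean((pred - y_data) ** 2)
    # physics residual of the assumed PDE u_xx = f on collocation points
    res = jax.vmap(u_xx, in_axes=(None, 0))(theta, x_col) - jax.vmap(f)(x_col)
    return data_term + lam * jnp.mean(res ** 2)

Minimizing this loss with any gradient-based optimizer penalizes violations of the differential constraint alongside the data misfit, which is the mechanism the quoted discussion points to.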
“…Closest to our model is the work in [6], which also proposes to integrate the DNN parametrizations of PDE solutions sequentially in time. However, Ref.…”
Section: Related Work
confidence: 99%
“…In this work we take a different approach. We develop time-integrators for PDEs that use DNNs to represent the solution but update the parameters sequentially from one time slice to another, rather than globally over the whole time-space domain, as was also proposed in [6]; this allows data to be collected along the way and permits integration over arbitrarily long time intervals. The scheme uses the structural form of the PDEs, but no a priori data about their solution.…”
Section: Introduction
confidence: 99%
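A minimal sketch of this sequential parameter update, in the spirit of the evolutional-DNN idea of [6]: the 1D heat equation u_t = u_xx, the tiny network, the collocation grid, and the explicit Euler step are all illustrative assumptions, not the cited authors' implementation.

import jax
import jax.numpy as jnp

H = 8                                       # hidden width (illustrative)
theta = 0.1 * jax.random.normal(jax.random.PRNGKey(0), (3 * H,))

def u(theta, x):
    # tiny scalar network u_theta(x); flat theta packs (W1, b1, w2)
    W1, b1, w2 = theta[:H], theta[H:2 * H], theta[2 * H:]
    return jnp.tanh(x * W1 + b1) @ w2

# spatial operator N(u) = u_xx of the assumed heat equation u_t = u_xx
u_xx = jax.grad(jax.grad(u, argnums=1), argnums=1)

xs = jnp.linspace(-1.0, 1.0, 64)            # collocation points

def step(theta, dt):
    # J[i, :] = du(theta, x_i)/dtheta and N[i] = u_xx(theta, x_i)
    J = jax.vmap(jax.grad(u, argnums=0), in_axes=(None, 0))(theta, xs)
    N = jax.vmap(u_xx, in_axes=(None, 0))(theta, xs)
    theta_dot, *_ = jnp.linalg.lstsq(J, N)  # least-squares for dtheta/dt
    return theta + dt * theta_dot           # explicit Euler step in time

# in practice theta would first be fitted to the initial condition u(x, 0);
# here the randomly initialized network is simply marched forward in time
for _ in range(100):
    theta = step(theta, 1e-4)

No data about the solution enters the update: the parameters evolve so that the time derivative of the network, (du/dtheta)(dtheta/dt), matches the spatial operator at the collocation points, which is what distinguishes this from fitting over the whole time-space domain at once.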
“…In this case, we learn to represent data at t_0 with parameters θ(t_0), and then demand that at each point in time the distribution satisfies the differential constraints for the PDE in question. This leads to model-dependent updates of the variational parameters, θ(t) → θ(t + ∆t) under an update rule γ, thus evolving the model in discrete time [64]. Below, we show how to introduce model-dependent differential constraints, and how to train or evolve the DQGM in both an explicit and an implicit manner.…”
Section: Model Differentiation and Constrained Training From Stochast...
confidence: 99%
“…Alternatively, we can use an evolutionary approach for updating the circuit parameters [64]. In this case, the time derivative of our model, ∂p_{θ,t}(x)/∂t, can be re-expressed using the chain rule as (∂p_{θ,t}(x)/∂θ)(∂θ/∂t).…”
Section: Model Differentiation and Constrained Training From Stochast...
confidence: 99%
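Written out, the chain rule behind this update reads as follows; the least-squares closure for the parameter velocity is an assumption about one natural way to determine the update rule γ, not a formula quoted from the citing papers.

\frac{\partial p_{\theta,t}(x)}{\partial t}
  = \frac{\partial p_{\theta,t}(x)}{\partial \theta} \cdot \dot{\theta}(t),
\qquad
\dot{\theta}(t) \approx \arg\min_{\eta} \sum_i
  \left| \nabla_{\theta}\, p_{\theta,t}(x_i) \cdot \eta
         - F\big[p_{\theta,t}\big](x_i) \right|^2 ,

where \partial_t p = F[p] is the differential constraint imposed at collocation points x_i; an explicit step then sets θ(t + ∆t) = θ(t) + ∆t · \dot{θ}(t), while an implicit variant solves for θ(t + ∆t) at the new time level.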