Neural network enhanced computations on coarse grids (2021)
DOI: 10.1016/j.jcp.2020.109821

Cited by 10 publications (9 citation statements). References 17 publications.
“…This procedure was developed in [31], where it was used to increase the convergence rate to steady state. It has also been used on linear problems to control error growth [34] and to aid coarse-grid computations [32]. In Article IV, we use the MPT technique together with data from the wall model to improve the coarse-grid results for the nonlinear INS equations.…”
Section: Article IV (mentioning, confidence: 99%)
“…In Paper I, we showed that for linear problems it was not sufficient to apply only dissipative boundary conditions along the physical and subdomain boundaries. To guarantee bounded energy for linear problems we introduced linear penalty terms [17,18] to the conservation laws in each domain to enforce two-way coupling.…”
Section: The Overset Domain Problem (mentioning, confidence: 99%)
“…Deep neural networks are systems of interconnected computational nodes loosely based on biological neural networks and, mathematically, can be formulated as compositional functions [7,32]. In contrast to shallow neural networks, which have just a single hidden layer, these NNs are composed of two or more hidden layers [3].…”
Section: Deep Neural Network (mentioning, confidence: 99%)
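The "compositional function" view in the statement above can be made concrete with a minimal sketch: a deep network is just nested applications of affine maps and nonlinearities. The layer sizes, weights, and `tanh` activation below are illustrative assumptions, not details from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # One affine map W @ x + b; He-style scaling is an assumed choice.
    W = rng.standard_normal((n_out, n_in)) * np.sqrt(2.0 / n_in)
    b = np.zeros(n_out)
    return W, b

def forward(params, x):
    # f(x) = L_out( tanh( L_2( tanh( L_1(x) ) ) ) ): a composition of functions.
    *hidden, last = params
    for W, b in hidden:
        x = np.tanh(W @ x + b)   # nonlinear activation between layers
    W, b = last
    return W @ x + b             # linear output layer

# Two hidden layers (16 nodes each) make this "deep" in the sense quoted above.
params = [layer(1, 16), layer(16, 16), layer(16, 1)]
y = forward(params, np.array([0.5]))
print(y.shape)
```

With a single entry in `hidden`, the same code realizes the shallow (one-hidden-layer) case the statement contrasts against.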
“…Autodiff is a technique that is used in PINNs to compute the partial derivatives of the NN approximations and thus embed the governing PDEs and associated boundary conditions in the loss function. Given that it facilitates "mesh-less" numerical computations of derivatives, it endows PINNs with several advantages over traditional numerical discretisation approaches for solving PDEs (such as the finite difference and finite element methods) that can be computationally expensive due to complex mesh-generation [7,32,40]. For example, Refs.…”
Section: Physics-Informed Neural Network (mentioning, confidence: 99%)
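The "mesh-less" derivative computation described above can be illustrated with forward-mode automatic differentiation via dual numbers: the derivative of the network output is obtained exactly at any point, with no grid. The one-neuron "network" u(x) = tanh(w·x + b) and its weights are assumptions for illustration only; PINN frameworks use full reverse-mode autodiff over much larger networks.

```python
import math

class Dual:
    """Dual number (value, derivative) for forward-mode autodiff."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule carried alongside the value.
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def tanh(z):
    t = math.tanh(z.val)
    return Dual(t, (1.0 - t * t) * z.dot)   # chain rule: d tanh = (1 - tanh^2)

def u(x):
    # Toy one-neuron network; weights are assumed values.
    w, b = 0.7, -0.2
    return tanh(w * x + b)

x0 = 0.5
out = u(Dual(x0, 1.0))   # seed dx/dx = 1, so out.dot = du/dx at x0
exact = 0.7 * (1.0 - math.tanh(0.7 * x0 - 0.2) ** 2)
print(out.dot, exact)
```

In a PINN loss, such exact derivatives of the network approximation are what get substituted into the PDE residual at scattered collocation points, which is why no mesh is needed.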