End-to-end differentiable learning of turbulence models from indirect observations
2021
DOI: 10.1016/j.taml.2021.100280

Cited by 25 publications (11 citation statements)
References 21 publications
“…Such an algorithmic modification is crucial for accelerating convergence and improving robustness of the learning, which can make an otherwise intractable learning problem with the adjoint method (Michelén Ströfer & Xiao 2021) computationally feasible with the ensemble method. We show that, by incorporating Hessian information with adaptive stepping, the ensemble Kalman method exceeds the performance of the adjoint-based learning (Michelén Ströfer & Xiao 2021) in both accuracy and robustness. Specifically, the present method successfully learned a generalizable nonlinear eddy viscosity model for the separated flows over periodic hills (Section 4), which the adjoint method was not able to achieve due to the lack of robustness.…”
Section: Introduction (mentioning)
confidence: 94%
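
The robustness claim above hinges on the structure of the ensemble Kalman update: its gain is built from ensemble cross-covariances, which applies an approximate Gauss-Newton (Hessian-informed) scaling without any adjoint solve. Below is a minimal sketch of one stochastic ensemble Kalman iteration for a black-box forward map; the names `enkf_update`, `G`, `y`, and `R` are illustrative assumptions, not the cited papers' code.

```python
# Sketch of one stochastic ensemble Kalman iteration for learning closure
# parameters from indirect (velocity) observations. G stands in for the
# RANS solve with a neural-network closure; all names are assumptions.
import numpy as np

def enkf_update(theta_ens, G, y, R, rng):
    """theta_ens: (J, n_theta) parameter ensemble; G: theta -> (n_obs,)
    predicted observations; y: (n_obs,) data; R: (n_obs, n_obs) noise cov."""
    J = theta_ens.shape[0]
    g_ens = np.array([G(t) for t in theta_ens])   # J forward solves, no adjoint
    d_theta = theta_ens - theta_ens.mean(axis=0)  # parameter anomalies
    d_g = g_ens - g_ens.mean(axis=0)              # predicted-output anomalies
    c_tg = d_theta.T @ d_g / (J - 1)              # parameter-output covariance
    c_gg = d_g.T @ d_g / (J - 1)                  # output-output covariance
    gain = c_tg @ np.linalg.inv(c_gg + R)         # Kalman gain: covariance
                                                  # scaling ~ Gauss-Newton step
    y_pert = y + rng.multivariate_normal(np.zeros(y.size), R, size=J)
    return theta_ens + (y_pert - g_ens) @ gain.T  # updated ensemble

# Toy usage: fit 3 parameters of a nonlinear map to 2 noisy observations
rng = np.random.default_rng(0)
G = lambda t: np.array([t @ t, t.sum()])
y, R = np.array([2.0, 0.5]), 1e-2 * np.eye(2)
theta_ens = rng.standard_normal((20, 3))
for _ in range(10):                               # iterate to handle nonlinearity
    theta_ens = enkf_update(theta_ens, G, y, R, rng)
```

The division by the output covariance in the gain is one concrete reading of the "Hessian information" the quote credits; the adaptive stepping it also mentions (not shown) would further scale each update.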
“…This is in stark contrast to the traditional method of training neural networks that learns from direct data (the output of the neural network, i.e., Reynolds stresses in this case), where the gradients can be obtained directly from back-propagation. In model-consistent training, one must typically resort to adjoint solvers to obtain the RANS solver-contributed gradient (the sensitivity of velocity with respect to Reynolds stresses), as the full model consists of both the neural network and the RANS solver (Holland et al. 2019; Michelén Ströfer & Xiao 2021). The adjoint sensitivity is then multiplied with the neural network gradient according to the chain rule to yield the full gradient.…”
Section: Introduction (mentioning)
confidence: 99%
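
The chain-rule assembly described above can be written out with a toy in place of the PDE solve: an explicit linear map stands in for the implicit RANS operator so that its "adjoint" sensitivity is available in closed form. Everything here (`toy_solver_matrix`, `loss`, `full_gradient`) is a hypothetical stand-in, not the cited implementation.

```python
# Toy sketch of the chain rule in model-consistent training: an explicit
# linear 'solver' replaces the implicit RANS solve so the example runs as-is.
import numpy as np

def toy_solver_matrix(n):
    """Stand-in for the linearized solver: u = A @ tau."""
    return np.tril(np.ones((n, n)))

def loss(theta, features, u_obs):
    """J = 0.5 * ||u - u_obs||^2 with tau = features @ theta, u = A @ tau."""
    A = toy_solver_matrix(features.shape[0])
    return 0.5 * np.sum((A @ (features @ theta) - u_obs) ** 2)

def full_gradient(theta, features, u_obs):
    """Chain rule: dJ/dtheta = (dtau/dtheta)^T (du/dtau)^T (dJ/du)."""
    A = toy_solver_matrix(features.shape[0])
    tau = features @ theta            # 'network' output: Reynolds stresses
    dJ_du = A @ tau - u_obs           # misfit gradient on velocities
    dJ_dtau = A.T @ dJ_du             # 'adjoint' step: solver sensitivity
    return features.T @ dJ_dtau       # back-propagation through the 'network'

# Finite-difference check that the assembled gradient is correct
rng = np.random.default_rng(1)
theta, features, u_obs = rng.normal(size=3), rng.normal(size=(5, 3)), rng.normal(size=5)
g = full_gradient(theta, features, u_obs)
eps = 1e-6
fd = np.array([(loss(theta + eps * e, features, u_obs)
                - loss(theta, features, u_obs)) / eps for e in np.eye(3)])
print(np.allclose(g, fd, atol=1e-4))  # expect True
```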
“…Studies have shown that the RANS equation can be very sensitive to the Reynolds stresses [27], which poses an additional challenge to closure modelling. An alternative strategy for improving robustness is to incorporate the RANS solver in the training process, but doing so introduces major challenges by requiring an adjoint solver [28] or ensemble simulations [29].…”
Section: Major Challenges In Coupling Neural Network Model To RANS So... (mentioning)
confidence: 99%
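
The sensitivity cited as [27] can be illustrated with a deliberately contrived linear toy: when a small closure error aligns with a strongly amplified direction of the (linearized) operator, the velocity error it induces is far larger. The diagonal matrix below is purely illustrative, not a discretized RANS operator.

```python
# Toy demonstration of amplification: a 1% Reynolds-stress error in the most
# sensitive direction of a contrived diagonal operator yields a huge velocity
# error. A is illustrative only, not a discretized RANS operator.
import numpy as np

n = 100
s = np.ones(n)
s[-1] = 1.0e3                         # one strongly amplified direction
A = np.diag(s)                        # toy linearized map: u = A @ tau
tau = np.ones(n)
tau[-1] = 0.0                         # baseline stresses avoid that direction
u = A @ tau
d_tau = np.zeros(n)
d_tau[-1] = 0.01 * np.linalg.norm(tau)          # a 1% stress perturbation
rel_tau = np.linalg.norm(d_tau) / np.linalg.norm(tau)
rel_u = np.linalg.norm(A @ d_tau) / np.linalg.norm(u)
print(f"{rel_tau:.1%} Reynolds-stress error -> {rel_u:.0%} velocity error")
# prints: 1.0% Reynolds-stress error -> 1000% velocity error
```

This worst-case alignment is contrived, but it shows why coupling the solver into training, so that errors are measured on velocities rather than on stresses, is attractive despite the cost of adjoint or ensemble machinery.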