2020
DOI: 10.1103/physrevfluids.5.084611

Formulating turbulence closures using sparse regression with embedded form invariance

Cited by 74 publications (49 citation statements)
References 41 publications
“…In recent years, with the enhancement of computers' processing capability, big data technology has developed rapidly and has been widely applied in many branches of science and technology. Machine learning (ML) methods such as eXtreme gradient boosting (XGBoost), 38 support vector machines (SVM), 39 random forests, 40 and artificial neural networks (ANN) 41‐46 have been increasingly used in the field of chemical engineering. The ANN has been around since the 1940s, 47 but it was not until the last two decades that it was applied to engineering (e.g., the filtered drag development using ANN 32,41,46 ).…”
Section: Introduction (mentioning)
Confidence: 99%
“…We apply a newly formulated modelling methodology, sparse regression with embedded form invariance (Beetham & Capecelatro 2020), to highly resolved Eulerian-Lagrangian data for fully developed CIT. The benefits of this methodology as compared with Neural Networks, which have become increasingly popular, are (i) interpretability of the resultant closures, since they are in a closed algebraic formulation, (ii) ease of dissemination to existing RANS solvers and (iii) robustness to very sparse training sets.…”
Section: Discussion (mentioning)
Confidence: 99%
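The methodology quoted above, sparse regression over a library of candidate terms, is compact enough to illustrate. Below is a minimal sketch of sequential-threshold least squares in the spirit of SINDy-style sparse regression; the threshold, iteration count, and library contents are illustrative assumptions, not the paper's actual algorithm or hyperparameters.

```python
import numpy as np

def sparse_regression(Theta, y, threshold=0.1, n_iter=10):
    """Sequential-threshold least squares over a candidate-term library.

    Theta: (n_samples, n_terms) matrix of candidate basis functions, e.g.
           form-invariant tensor basis functions evaluated on training data.
    y:     (n_samples,) target closure values.
    Returns a coefficient vector with most entries driven to zero, i.e.
    a compact algebraic model.
    """
    coeffs, *_ = np.linalg.lstsq(Theta, y, rcond=None)
    for _ in range(n_iter):
        small = np.abs(coeffs) < threshold
        coeffs[small] = 0.0                 # prune weak terms
        keep = ~small
        if not keep.any():
            break
        # refit the surviving terms by ordinary least squares
        coeffs[keep], *_ = np.linalg.lstsq(Theta[:, keep], y, rcond=None)
    return coeffs
```

The few surviving nonzero coefficients define a closed algebraic expression, which is what makes the resulting closure interpretable and easy to drop into an existing RANS solver, in contrast to an opaque neural-network map.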
“…The six-term model, in turn, reduces overall model error by more accurately describing both components; however, this is most pronounced in the cross-stream direction (see figures 4(b) and 4(d)). In addition to discovering compact, algebraic models, sparse regression is also robust to sparse training data (Beetham & Capecelatro 2020). To illustrate this, a model was discovered using a sparse training dataset corresponding to (Ar, α_p) = [(1.8, 0.05), (5.4, 0.001), (18.0, 2.55)] and then tested using the remaining six cases.…”
Section: Drag Production (mentioning)
Confidence: 99%
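The robustness test described in that statement amounts to fitting on a small subset of cases and evaluating on the held-out rest. A minimal sketch, reusing the sparse_regression function above; the nine-case 3×3 grid of (Ar, α_p) values, the array shapes, and the random placeholder data are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: assume the nine cases form the 3x3 grid of Ar and
# alpha_p values implied by the quote; random arrays stand in for the
# real Eulerian-Lagrangian statistics (hypothetical, for illustration).
Ar_vals, ap_vals = [1.8, 5.4, 18.0], [0.001, 0.05, 2.55]
cases = {(Ar, ap): (rng.normal(size=(50, 8)), rng.normal(size=50))
         for Ar in Ar_vals for ap in ap_vals}

# The three training cases quoted in the text; the other six are held out.
train_keys = [(1.8, 0.05), (5.4, 0.001), (18.0, 2.55)]
test_keys = [k for k in cases if k not in train_keys]

Theta_train = np.vstack([cases[k][0] for k in train_keys])
y_train = np.concatenate([cases[k][1] for k in train_keys])
coeffs = sparse_regression(Theta_train, y_train)  # sketch defined above

for k in test_keys:
    Theta_k, y_k = cases[k]
    rel_err = np.linalg.norm(Theta_k @ coeffs - y_k) / np.linalg.norm(y_k)
    print(f"(Ar, alpha_p) = {k}: relative error {rel_err:.2f}")
```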