Published: 2022
DOI: 10.1016/j.combustflame.2022.112425

Criteria to switch from tabulation to neural networks in computational combustion

Cited by 4 publications (2 citation statements)
References 29 publications
“…The DNN requires only 2% of the QFM's memory, while the computational cost remains similar. However, the performance heavily depends on DNN architecture [48] and hardware, which will be the subject of future studies. This work can serve as the basis to investigate more complex flame configurations in future works, involving aspects such as differential diffusion and stretch effects, turbulent combustion, or hydrogen/hydrocarbon fuel blends.…”
Section: Discussion (mentioning)
Confidence: 99%
“…Data-based methods on the other hand (Maulik & San 2017; Nikolaou & Vervisch 2018) require high-quality and relatively large amounts of data. In addition, their computational efficiency depends strongly on the structure (number of layers, number of neurons) of the network, and there exist in fact bounds above which a neural network will always underperform (more floating-point operations) in comparison to using a simple tabulation approach (Nikolaou, Vervisch & Domingo 2022). One solution to this problem is to use approximate reconstruction operators derived from truncated Taylor-series expansions of the inverse filter operator.…”
Section: Introduction (mentioning)
Confidence: 99%
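The bound mentioned in the citation statement above, where a network always costs more floating-point operations than tabulation, can be illustrated with a rough operation count. The sketch below is not taken from the cited papers; the function names, the layer sizes, and the cost model (2 FLOPs per multiply-accumulate, 3 FLOPs per linear interpolation step) are illustrative assumptions.

```python
def mlp_flops(layer_sizes):
    """Approximate FLOPs for one forward pass of a dense MLP.

    A layer with n_in inputs and n_out outputs costs roughly
    2 * n_in * n_out (multiplies + adds) plus n_out activation ops.
    """
    flops = 0
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        flops += 2 * n_in * n_out + n_out
    return flops


def table_lookup_flops(n_dims):
    """Approximate FLOPs for one multilinear interpolation in an
    n_dims-dimensional lookup table.

    Reducing the 2**n_dims corner values pairwise takes 2**n_dims - 1
    linear interpolations at ~3 ops each; index arithmetic is ignored.
    """
    return 3 * (2 ** n_dims - 1)


# Hypothetical 2-input, 1-output network with two hidden layers of 32:
# even this small MLP needs orders of magnitude more FLOPs per query
# than a 2-D multilinear table lookup.
print(mlp_flops([2, 32, 32, 1]))   # 2305
print(table_lookup_flops(2))       # 9
```

Under this simple cost model the comparison flips only in high dimensions, where the 2**n_dims corner evaluations of multilinear interpolation grow exponentially while the MLP cost grows with the layer widths; this is one way to read the bounds discussed in Nikolaou, Vervisch & Domingo (2022).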