Rotational and reflectional equivariant convolutional neural network for data-limited applications: Multiphase flow demonstration
2021
DOI: 10.1063/5.0066049

Abstract: This article deals with approximating the steady-state, particle-resolved fluid flow around a fixed particle of interest under the influence of randomly distributed stationary particles in a dispersed multiphase setup using a convolutional neural network (CNN). The considered problem involves rotational symmetry about the mean velocity (streamwise) direction. This work therefore enforces that symmetry using an SE(3)-equivariant CNN architecture, where SE(3) denotes the special Euclidean group in three dimensions, which is translation and three-dimensional…
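The architecture named in the abstract encodes rotational symmetry directly in the convolution weights. Below is a minimal, hypothetical sketch of that weight-sharing idea in a deliberately simplified setting: a PyTorch "lifting" group convolution that is exactly equivariant to 90° planar rotations (the cyclic group C4). It is not the authors' SE(3)-equivariant 3D network; the class name, channel counts, and kernel size are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' code): a C4 "lifting"
# group convolution in 2D. The SE(3)-equivariant network in the paper
# applies the same weight-sharing principle to 3D rotations.
import torch
import torch.nn as nn
import torch.nn.functional as F


class C4LiftingConv(nn.Module):
    """Convolve the input with four 90-degree-rotated copies of one kernel."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, in_ch, H, W) -> output: (B, out_ch, 4, H, W)
        pad = self.weight.shape[-1] // 2
        outs = [
            F.conv2d(x, torch.rot90(self.weight, r, dims=(2, 3)), padding=pad)
            for r in range(4)  # one output map per planar rotation
        ]
        return torch.stack(outs, dim=2)


if __name__ == "__main__":
    layer = C4LiftingConv(in_ch=2, out_ch=4)
    x = torch.randn(1, 2, 16, 16)
    y = layer(x)
    y_rot = layer(torch.rot90(x, 1, dims=(2, 3)))
    # Equivariance check: rotating the input rotates every output map and
    # cyclically shifts the four rotation channels by one position.
    expected = torch.rot90(y, 1, dims=(3, 4)).roll(1, dims=2)
    print(torch.allclose(y_rot, expected, atol=1e-5))  # True
```

Rotating the input by 90° thus produces a predictably transformed output rather than an arbitrary one, which is what "equivariant" means here; the SE(3)-equivariant network described in the abstract extends the same principle to three-dimensional rotations (and, per the title, reflections).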

Cited by 21 publications (13 citation statements). References 30 publications.
“…Another class of methods pertains to customized neural network architectures that encode prior physical or mathematical knowledge as hard constraints. Some examples of these methods applied in fluid dynamics are the tensor basis neural network [42], transformation invariant neural network [25], physics-embedded neural network [43], spatial transformer [44,45], and equivariant networks [46,47].…”
Section: Introduction (mentioning; confidence: 99%)
“…The smaller number of parameters suggests that an equivariant model is trainable with a smaller amount of data. [15] This suggestion was confirmed in Appendix D 2. Fig.…”
Section: Results (mentioning; confidence: 54%)
“…A smaller number of trainable parameters implies that an equivariant model is trainable with a smaller amount of training data. [15] Fig. 19 shows the NERs against various sizes of training data.…”
Section: Dependency on the Size of Training Data (mentioning; confidence: 99%)