2014
DOI: 10.1109/mcse.2014.75

Scalable Implicit Flow Solver for Realistic Wing Simulations with Flow Control

Abstract: Active flow control on a realistic wing design can be enabled by a scalable, fully implicit, unstructured, finite-element flow solver combined with high-performance computing resources. This article describes the active flow control application; summarizes the main features of the implementation of PHASTA, a massively parallel turbulent flow solver; and demonstrates the method's strong scalability at extreme scale.
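The strong-scalability claim in the abstract is conventionally reported as strong-scaling speedup and parallel efficiency at a fixed problem size. The Python sketch below only illustrates how such figures are computed; the core counts and timings are made-up placeholders, not results from the paper.

```python
# Hypothetical strong-scaling bookkeeping: fixed problem size, growing core count.
# The core counts and wall-clock times below are illustrative placeholders,
# not measurements from the PHASTA study.

runs = {
    # cores : wall-clock seconds per time step (made-up values)
    8192:  4.00,
    16384: 2.05,
    32768: 1.08,
}

base_cores = min(runs)
base_time = runs[base_cores]

for cores, t in sorted(runs.items()):
    speedup = base_time / t
    ideal = cores / base_cores
    efficiency = speedup / ideal
    print(f"{cores:>6} cores: speedup {speedup:5.2f}x, "
          f"strong-scaling efficiency {efficiency:6.1%}")
```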

Cited by 51 publications (22 citation statements) · References 12 publications
“…The largest scale run to date used 256K MPI processes on Argonne National Laboratory's BlueGene/Q Mira machine [RSC*14]. The scaling studies utilized Parallel Hierarchic Adaptive Stabilized Transient Analysis (PHASTA), a highly scalable CFD code developed by Kenneth Jansen at UC Boulder, to simulate active flow control on a complex wing design (see Figure ).…”
Section: In Depth Analysis Of Four In Situ Infrastructures
mentioning
confidence: 99%
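Runs like the 256K-process Mira job cited above follow a rank-per-partition pattern: each MPI rank owns one piece of the distributed mesh, advances the solution, and the slowest rank sets the reported step time. The mpi4py sketch below illustrates only that pattern; `load_partition` and `advance_one_step` are hypothetical stand-ins, not PHASTA routines.

```python
# Minimal MPI skeleton for a rank-per-partition solver run (illustrative only).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

def load_partition(rank, size):
    # Placeholder: each rank would read its own piece of the distributed mesh.
    return {"rank": rank, "elements": []}

def advance_one_step(part):
    # Placeholder for one implicit time step (assembly + linear solve).
    pass

part = load_partition(rank, size)

t0 = MPI.Wtime()
for _ in range(10):          # a few time steps, enough to time
    advance_one_step(part)
elapsed = MPI.Wtime() - t0

# The slowest rank determines the step time reported in scaling studies.
worst = comm.reduce(elapsed, op=MPI.MAX, root=0)
if rank == 0:
    print(f"{size} ranks: max time over ranks = {worst:.3f} s")
```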
“…Elements may not be grouped at all but instead assembled into an unstructured mesh; some recent references are [27,31,46]. The connectivity of elements can be modeled as a graph, and the partitioning of elements among parallel processes can be translated into partitioning the graph.…”
Section: Introduction
mentioning
confidence: 99%
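Modeling element connectivity as a graph, as in the snippet above, usually means building the element dual graph: each element is a vertex, and two elements are joined when they share a face (or, in 2D, an edge). The toy Python sketch below builds such a dual graph for four triangles and applies a deliberately naive two-way split; production codes hand the same graph to a parallel partitioner instead.

```python
# Toy illustration of "element connectivity as a graph": elements become graph
# vertices and two elements are connected when they share an edge (2D case).
from collections import defaultdict

# Four triangles (element -> node IDs) forming a small patch.
elements = [
    (0, 1, 2),
    (1, 3, 2),
    (2, 3, 4),
    (3, 5, 4),
]

# Map each mesh edge to the elements that share it.
edge_to_elems = defaultdict(list)
for eid, nodes in enumerate(elements):
    for i in range(3):
        edge = tuple(sorted((nodes[i], nodes[(i + 1) % 3])))
        edge_to_elems[edge].append(eid)

# Build the element dual graph: adjacency by shared edges.
dual = defaultdict(set)
for elems in edge_to_elems.values():
    if len(elems) == 2:
        a, b = elems
        dual[a].add(b)
        dual[b].add(a)

# Naive two-way "partition": first half of element IDs vs. second half.
nparts = 2
part = {eid: (nparts * eid) // len(elements) for eid in range(len(elements))}

# Count cut edges (dual-graph edges whose endpoints land in different parts).
cut = sum(1 for a, nbrs in dual.items() for b in nbrs if a < b and part[a] != part[b])
print("dual graph:", {k: sorted(v) for k, v in dual.items()})
print("partition:", part, "cut edges:", cut)
```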
“…Common application codes such as FEniCS [33], PLUM [35,36], OpenFOAM [37], or MOAB from the SIGMA toolkit [53] delegate graph partitioning to third-party software like ParMETIS [29] or Scotch [20]. Graph-based algorithms have been advanced to target millions of processes and billions of elements [18,42,49]. Still, increasing the scalability and decreasing the absolute runtime and memory demands of distributed implementations remains a challenge, and the lack of an obvious parent-child structure in many unstructured meshing approaches prevents certain use cases.…”
Section: Introduction
mentioning
confidence: 99%
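Partitioners in the METIS/Scotch family, to which the quoted codes delegate this step, typically take the graph in a compressed (CSR-style) form, commonly named `xadj`/`adjncy` in the METIS documentation. The sketch below flattens a small adjacency list into those two arrays; the graph is illustrative, and the actual library call signatures should be taken from the respective manuals.

```python
# Sketch: flatten an adjacency list into the CSR-style (xadj, adjncy) arrays
# that METIS-family graph partitioners typically consume. The small graph
# below is illustrative, not a real mesh dual graph.

adjacency = {
    0: [1],
    1: [0, 2],
    2: [1, 3],
    3: [2],
}

xadj = [0]          # prefix sums: where each vertex's neighbour list starts
adjncy = []         # concatenated neighbour lists
for v in sorted(adjacency):
    adjncy.extend(adjacency[v])
    xadj.append(len(adjncy))

print("xadj  :", xadj)    # [0, 1, 3, 5, 6]
print("adjncy:", adjncy)  # [1, 0, 2, 1, 3, 2]
```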