This paper focuses on the active flow control of a computational fluid dynamics simulation over a range of Reynolds numbers using deep reinforcement learning (DRL). More precisely, the proximal policy optimization (PPO) method is used to control the mass flow rates of four synthetic jets located symmetrically on the upper and lower sides of a cylinder immersed in a two-dimensional flow domain. The learning environment supports four flow configurations, with Reynolds numbers of 100, 200, 300, and 400. A new smoothing interpolation function is proposed to help the PPO algorithm learn continuous actions; this proves essential to suppress problematic jumps in the lift and to allow better convergence of the training process. It is shown that the DRL controller significantly reduces the lift and drag fluctuations and actively reduces the drag by ∼5.7%, 21.6%, 32.7%, and 38.7% at Re = 100, 200, 300, and 400, respectively. More importantly, it also effectively reduces drag at any previously unseen Reynolds number between 60 and 400. This highlights the generalization ability of deep neural networks and marks an important milestone toward practical applications of DRL to active flow control.
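The key technical ingredient here is the smoothing of the control signal between successive PPO decisions. Below is a minimal sketch of one plausible realization, assuming a simple exponential blend applied at every CFD substep; the function names, the blending form, and the parameter beta are illustrative assumptions, not the exact interpolation function proposed in the paper.

```python
# Sketch of action smoothing between PPO decisions (assumed exponential blend).
# Within an action interval, the applied jet mass flow rate moves a fraction
# beta of the remaining distance toward the agent's new target at every CFD
# substep, so the control signal never jumps discontinuously.

def smoothed_action(a_prev: float, a_target: float, beta: float = 0.1) -> float:
    """Blend the previously applied flow rate toward the agent's new target."""
    return a_prev + beta * (a_target - a_prev)


def flow_rates_over_interval(a_prev: float, a_target: float,
                             n_substeps: int = 50, beta: float = 0.1):
    """Yield one smoothed jet mass flow rate per CFD substep."""
    a = a_prev
    for _ in range(n_substeps):
        a = smoothed_action(a, a_target, beta)
        yield a
```

The rationale is that a discontinuous jump in jet flow rate excites a spurious transient in the pressure field, and hence in the lift signal; smoothing removes that transient, which is why it aids convergence of the training.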
Deep Reinforcement Learning (DRL) has recently been proposed as a methodology to discover complex Active Flow Control (AFC) strategies [Rabault, J., Kuchta, M., Jensen, A., Réglade, U., & Cerardi, N. (2019). "Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control." Journal of Fluid Mechanics, 865, 281-302]. However, while promising results were obtained on a simple 2D benchmark flow at a moderate Reynolds number, considerable speedups will be required to investigate more challenging flow configurations. For DRL trained on Computational Fluid Dynamics (CFD) data, the CFD solver, rather than the training of the artificial neural network, was found to be the factor limiting execution speed. Speedups should therefore be obtained through a combination of two approaches. The first, which is well documented in the literature, is to parallelize the numerical simulation itself. The second is to adapt the DRL algorithm for parallelization; a simple strategy is to run several independent simulations in parallel to collect experiences faster. The present work discusses this second solution. We show that perfect speedups can be obtained up to the batch size of the DRL agent, and that scaling remains only slightly suboptimal for an even larger number of simulations. This is therefore an important step towards enabling the study of more sophisticated fluid mechanics problems through DRL.
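A minimal sketch of this multi-environment experience collection is given below, assuming one process per simulation feeding complete episodes to a single learner through a queue. The toy rollout is a stand-in for the (expensive) CFD solver, and all names here are illustrative rather than the paper's actual implementation.

```python
import multiprocessing as mp
import random

# Sketch of multi-environment experience collection: n_envs independent
# simulations run in parallel processes, each returning one episode that a
# single DRL learner then consumes as part of its training batch.

def run_episode(env_id: int, queue: mp.Queue) -> None:
    random.seed(env_id)
    # (state, action, reward) tuples; a real version would step a CFD solver
    trajectory = [(random.random(), random.random(), random.random())
                  for _ in range(10)]
    queue.put((env_id, trajectory))


def collect_parallel(n_envs: int):
    queue = mp.Queue()
    procs = [mp.Process(target=run_episode, args=(i, queue))
             for i in range(n_envs)]
    for p in procs:
        p.start()
    batch = [queue.get() for _ in range(n_envs)]  # blocks until all episodes arrive
    for p in procs:
        p.join()
    return batch  # one PPO update can consume all n_envs episodes at once


if __name__ == "__main__":
    # Speedup is near-perfect while n_envs does not exceed the agent's batch size.
    experiences = collect_parallel(n_envs=4)
    print(len(experiences), "episodes collected")
```

Since the simulations are independent, the wall-clock cost of collecting a batch of episodes is roughly that of a single episode, which is the source of the near-perfect scaling reported above.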
Deep Reinforcement Learning (DRL) has recently spread into a range of domains within physics and engineering, with multiple remarkable achievements. Still, much remains to be explored before the capabilities of these methods are well understood. In this paper, we present the first application of DRL to direct shape optimization. We show that, given an adequate reward, an artificial neural network trained through DRL is able to generate optimal shapes on its own, without any prior knowledge and within a constrained time. While we choose here to apply this methodology to aerodynamics, the optimization process itself is agnostic to the details of the use case; our work thus paves the way for new generic shape optimization strategies, both in fluid mechanics and, more generally, in any domain where a relevant reward function can be defined.
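The phrase "given an adequate reward" is central: the agent only sees candidate shapes through the scalar it is asked to maximize. As a purely illustrative sketch, not the paper's actual reward, an aerodynamic instance could score each generated shape by its lift-to-drag ratio and penalize shapes that fail meshing or make the flow solver diverge.

```python
# Hypothetical reward for DRL-driven aerodynamic shape optimization: score a
# candidate shape by its lift-to-drag ratio, with a fixed penalty for shapes
# that cannot be meshed or whose CFD evaluation diverges. The exact reward in
# the paper may differ; this only illustrates the general form.

def shape_reward(lift: float, drag: float, valid: bool,
                 failure_penalty: float = -10.0, eps: float = 1e-8) -> float:
    if not valid:
        return failure_penalty          # unmeshable shape or diverged solver
    return lift / (abs(drag) + eps)     # reward high lift at low drag
```

Because the optimization loop interacts with the problem only through this scalar, swapping the function for, say, a heat-transfer or structural objective leaves the rest of the method unchanged, which is the sense in which the approach is agnostic to the use case.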
Deep reinforcement learning (DRL) has recently been adopted in a wide range of physics and engineering domains for its ability to solve decision-making problems that were previously out of reach due to a combination of non-linearity and high dimensionality. In the last few years, it has spread into the field of computational mechanics, and particularly into fluid dynamics, with recent applications in flow control and shape optimization. In this work, we conduct a detailed review of existing DRL applications to fluid mechanics problems. In addition, we present recent results that further illustrate the potential of DRL in fluid mechanics. The coupling methods used in each case are covered, detailing their advantages and limitations. Our review also focuses on the comparison with classical methods for optimal control and optimization. Finally, several test cases