2023
DOI: 10.1017/jfm.2022.1020

Reinforcement-learning-based control of convectively unstable flows

Abstract: This work reports the application of a model-free deep reinforcement learning (DRL) based flow control strategy to suppress perturbations evolving in the one-dimensional linearised Kuramoto–Sivashinsky (KS) equation and two-dimensional boundary layer flows. The former is commonly used to model the disturbance developing in flat-plate boundary layer flows. These flow systems are convectively unstable, being able to amplify the upstream disturbance, and are thus difficult to control. The control action is implem…
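The abstract describes a feedback loop in which an RL agent reads local sensors and applies localised actuation to damp perturbations governed by the linearised KS equation. The sketch below is a minimal, hypothetical reconstruction of such an environment in Python (NumPy only), not the authors' implementation: the equation coefficients, Gaussian actuator/disturbance shapes, sensor positions and energy-based reward are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (assumed setup, not the paper's): a gym-style environment
# wrapping one common linearised KS form,
#   u_t = -c u_x - u_xx - u_xxxx + control forcing + upstream disturbance,
# on a periodic domain, advanced with a per-mode exponential integrator.
class LinearKSEnv:
    def __init__(self, nx=256, length=200.0, c=0.4, dt=0.1, seed=0):
        self.nx, self.length, self.c, self.dt = nx, length, c, dt
        self.x = np.linspace(0.0, length, nx, endpoint=False)
        k = 2.0 * np.pi * np.fft.rfftfreq(nx, d=length / nx)
        # Per-mode linear operator: advection (-i c k), anti-diffusion (k^2),
        # hyper-diffusion (-k^4); modes with |k| < 1 are amplified.
        lam = -1j * self.c * k + k**2 - k**4
        self.expdt = np.exp(lam * dt)          # one-step propagator for the linear terms
        self.rng = np.random.default_rng(seed)
        # Assumed Gaussian disturbance source, actuator and sensor placements.
        self.dist_shape = np.exp(-((self.x - 0.1 * length) / 2.0) ** 2)
        self.act_shape = np.exp(-((self.x - 0.5 * length) / 2.0) ** 2)
        self.sensor_idx = np.searchsorted(self.x, [0.30 * length, 0.40 * length, 0.45 * length])
        self.obj_mask = np.abs(self.x - 0.8 * length) < 5.0  # region where energy is penalised
        self.reset()

    def reset(self):
        self.u = np.zeros(self.nx)
        return self.u[self.sensor_idx].copy()

    def step(self, action):
        # Stochastic upstream disturbance plus control forcing, held constant over dt
        # (simple first-order treatment of the forcing term).
        forcing = self.rng.normal() * self.dist_shape + float(action) * self.act_shape
        u_hat = np.fft.rfft(self.u) + self.dt * np.fft.rfft(forcing)
        self.u = np.fft.irfft(self.expdt * u_hat, n=self.nx)
        obs = self.u[self.sensor_idx].copy()
        reward = -np.mean(self.u[self.obj_mask] ** 2)  # penalise downstream perturbation energy
        return obs, reward, False, {}

# Usage sketch: a uniform random policy standing in for a trained DRL agent.
env = LinearKSEnv()
obs = env.reset()
for _ in range(500):
    action = np.random.uniform(-1.0, 1.0)   # a trained agent would map obs -> action
    obs, reward, done, info = env.step(action)
```

In place of the random policy above, a trained DRL agent (e.g. a DDPG- or SAC-style actor, as commonly used in flow control studies) would map the sensor readings upstream of the objective region to the actuation amplitude at every step.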

Cited by 7 publications (6 citation statements)
References 101 publications
“…Paris, Beneddine & Dandois (2023) proposed an RL methodology to optimise actuator placement in a laminar 2-D flow around an aerofoil, addressing the trade-off between performance and the number of actuators. Xu & Zhang (2023) used RL to suppress instabilities in both the Kuramoto–Sivashinsky system and 2-D boundary layers, showing the effectiveness and robustness of RL control. Pino et al. (2023) compared RL and genetic programming algorithms to global optimisation techniques for various cases, including the viscous Burgers equation and vortex shedding behind a 2-D cylinder.…”
Section: Model-free Active Flow Control By Reinforcement Learning
confidence: 99%
“…Sonoda et al. (2023) and Guastoni et al. (2023) applied RL control in numerical simulations of turbulent channel flow, and showed that RL control can outperform opposition control in this complex flow control task. RL techniques have also been applied to various flow control problems with different geometries, such as flow past a 2-D cylinder, vortex-induced vibration of a 2-D square bluff body (Chen et al. 2023), and a 2-D boundary layer (Xu & Zhang 2023). However, model-free RL control techniques also have several drawbacks compared to model-based control.…”
Section: Model-free Active Flow Control By Reinforcement Learning
confidence: 99%
“…166 and 167) body around a larger one with the prospect of reducing the recirculation bubble/the drag downstream, to directly modify the shape of the principal body 52,[168][169][170] or to control its movements 102,153 with the same objective of optimizing its aerodynamic/hydrodynamic properties. Other research works handle heat transport issues such as the Rayleigh-Bénard instability 127, or other convectively unstable flows 171.…”
Section: Deep Reinforcement Learning and Active Flow Control: Challen...
confidence: 99%
“…Another example is the work of Gunnarson et al. (2021), who explored the influence of the input data provided to the RL algorithm to observe the state of the environment, and compared the results with optimal control that is aware of the full flow field. Other studies optimized the sensor layout feeding the algorithm (Paris et al., 2021; Xu and Zhang, 2023), or, in an experimental context, performed high-frequency filtering of the state to enable the agent to learn a successful policy (Fan et al., 2020). Another element of tremendous importance is the feedback signal provided to the RL algorithm indicating how well it performed.…”
Section: Introduction
confidence: 99%