2020
DOI: 10.3390/en13225920

Deep Reinforcement Learning Control of Cylinder Flow Using Rotary Oscillations at Low Reynolds Number

Abstract: We apply deep reinforcement learning to active closed-loop control of a two-dimensional flow over a cylinder oscillating around its axis with a time-dependent angular velocity representing the only control parameter. Experimenting with the angular velocity, the neural network is able to devise a control strategy based on low-frequency harmonic oscillations with some additional modulations to stabilize the Kármán vortex street at a low Reynolds number Re=100. We examine the convergence issue for two reward functions…
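A minimal sketch of how such a closed-loop control problem can be framed for a deep RL agent, assuming a gymnasium-style interface. Everything here is a hypothetical placeholder: the class name RotaryCylinderEnv, the wake-probe observation, the toy linear "flow" update, and the drag-proxy reward are stand-ins for the paper's 2D Navier-Stokes simulation at Re=100, not the authors' implementation.

```python
# Hypothetical sketch: rotary-oscillation control of a cylinder wake as an RL environment.
# The linear state update below is a toy placeholder for the real CFD solver; only the
# interface mirrors the setup: action = angular velocity (the single control parameter),
# reward = negative drag proxy (the paper studies two reward functions; this is neither).
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class RotaryCylinderEnv(gym.Env):
    """Toy stand-in: the agent sets the cylinder's angular velocity each step."""

    def __init__(self, n_probes: int = 8, max_steps: int = 200):
        # Observation: velocity probes in the wake (hypothetical count).
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(n_probes,), dtype=np.float32)
        # Action: time-dependent angular velocity, normalized to [-1, 1].
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
        self.max_steps = max_steps

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)  # seeds self.np_random
        self.t = 0
        self.state = self.np_random.normal(size=self.observation_space.shape).astype(np.float32)
        return self.state, {}

    def step(self, action):
        omega = float(np.clip(action[0], -1.0, 1.0))
        # Placeholder dynamics: damped wake state nudged by the rotation.
        self.state = (0.95 * self.state + 0.1 * omega).astype(np.float32)
        drag_proxy = float(np.mean(self.state ** 2))
        reward = -drag_proxy  # penalize drag; stands in for the paper's reward functions
        self.t += 1
        truncated = self.t >= self.max_steps
        return self.state, reward, False, truncated, {}
```

Any off-the-shelf actor-critic agent (e.g., PPO) could then be trained against this interface; in the paper the environment is the flow simulation itself rather than this toy update.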

Cited by 36 publications (14 citation statements) · References 43 publications
“…The fifth article, written by Tokarev et al [35], is entitled "Deep reinforcement learning control of cylinder flow using rotary oscillations at low Reynolds number". The authors focus on reducing the drag in a two-dimensional cylinder, which oscillates around its axis with time-dependent angular velocity.…”
Section: Summary Of The Contributions
Citation type: mentioning (confidence: 99%)
“…The main framework of the RL consists of an agent (e.g., a neural network in deep RL) that interacts with an environment to learn a policy that will maximize the cumulative reward over a long time horizon [315]. In recent years, the RL has been explored for fluid dynamics problems including animal locomotion [116,279,339], control of chaotic dynamics [41,59,337], drag reduction of bluff bodies [271,282,330], flow separation control [307], and turbulence closure modeling [242]. Along with a computer simulation environment, RL has been effectively applied for active flow control around bluff bodies in an experimental setup [98].…”
Section: Big Data Cybernetics
Citation type: mentioning (confidence: 99%)
“…The main framework of the RL consists of an agent (for example, a neural network in deep RL) that interacts with an environment to learn a policy that will maximize the cumulative reward over a long time horizon [333] . In recent years, the RL has been explored for fluid dynamics problems including animal locomotion [334][335][336] , control of chaotic dynamics [337][338][339] , drag reduction of bluff bodies [340][341][342] , flow separation control [343] , and turbulence closure modeling [344] . Along with a computer simulation environment, RL has been effectively applied for active flow control around bluff bodies in an experimental setup [345] .…”
Section: Figure 3: An Overview Of Big Data Cybernetics
Citation type: mentioning (confidence: 99%)
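The agent-environment loop described in both statements can be made concrete with a short rollout, shown here with a random policy standing in for the learned one. This reuses the hypothetical RotaryCylinderEnv sketch above; a deep RL agent would choose actions to maximize the cumulative reward that this loop merely measures.

```python
# Minimal agent-environment interaction loop (hypothetical sketch):
# accumulate reward over one episode under a random stand-in policy.
env = RotaryCylinderEnv()
obs, info = env.reset(seed=0)
episode_return, done = 0.0, False
while not done:
    action = env.action_space.sample()  # a trained policy would act on `obs` instead
    obs, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
    done = terminated or truncated
print(f"episode return: {episode_return:.3f}")
```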