2021
DOI: 10.1088/1674-1056/abd74f
Control of chaos in Frenkel–Kontorova model using reinforcement learning*

Abstract: It is shown that we can control spatiotemporal chaos in the Frenkel–Kontorova (FK) model by a model-free control method based on reinforcement learning. The method uses Q-learning to find optimal control strategies based on the reward feedback from the environment that maximizes its performance. The optimal control strategies are recorded in a Q-table and then employed to implement controllers. The advantage of the method is that it does not require explicit knowledge of the system, target states, and unsta…
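The tabular Q-learning loop the abstract describes can be sketched in a few lines. This is a generic illustration, not the paper's actual implementation: the state discretization, action set, and hyperparameter values (`n_states`, `n_actions`, `alpha`, `gamma`, `eps`) are all assumptions chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretization of the controlled system's phase space.
n_states, n_actions = 16, 4
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate

# The Q-table that records the control strategies.
Q = np.zeros((n_states, n_actions))

def choose_action(s):
    # epsilon-greedy: mostly exploit the table, occasionally explore.
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[s]))

def q_update(s, a, r, s_next):
    # Standard Q-learning update; model-free, it uses only the observed
    # reward and the next state, never an explicit model of the dynamics.
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
```

Once training converges, the controller simply looks up `argmax(Q[s])` in the table at each step, which is what "employed to implement controllers" amounts to in the tabular setting.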

Cited by 3 publications (3 citation statements)
References 30 publications
“…Similarly, most existing methods for the anti-control of chaos require explicit knowledge of the system in real-world applications. We demonstrated that control of chaos using reinforcement learning is model-free and easy to employ in the FK model [16]. As far as we know, anti-control of chaos using reinforcement learning has not been reported.…”
mentioning
confidence: 94%
“…They showed that this method can not only control high-dimensional discrete systems such as 1-D and 2-D coupled logistic-map lattices [14], but can also tackle the targeting problem in a complex multi-stable system by guiding its trajectory to a metastable state [15]. Lei and Han successfully applied the Q-learning method to the control of chaos in the Frenkel–Kontorova model [16]. The goal of reinforcement learning is to maximize the cumulative reward, which determines whether the agent can ultimately learn the desired goal: stabilizing an unstable periodic orbit embedded in the chaotic attractor.…”
mentioning
confidence: 99%
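The goal stated in the citation above, stabilizing an unstable periodic orbit by maximizing cumulative reward, can be illustrated on a toy system. The sketch below is not the cited papers' setup: it uses the chaotic logistic map rather than coupled lattices or the FK model, and the perturbation sizes, bin count, and learning parameters are all illustrative assumptions. The agent learns small admissible nudges that keep the orbit near the map's unstable fixed point.

```python
import numpy as np

rng = np.random.default_rng(1)

r = 3.9                        # logistic map in the chaotic regime
x_star = 1.0 - 1.0 / r         # unstable fixed point of x -> r*x*(1-x)
actions = np.array([-0.02, 0.0, 0.02])   # small admissible perturbations
n_bins = 50                    # coarse discretization of x in [0, 1]
alpha, gamma, eps = 0.2, 0.9, 0.1

Q = np.zeros((n_bins, len(actions)))

def state(x):
    # Map a continuous x in [0, 1] to a discrete bin index.
    return min(int(x * n_bins), n_bins - 1)

x = rng.random()
for step in range(20000):
    s = state(x)
    # epsilon-greedy action selection over the Q-table.
    if rng.random() < eps:
        a = int(rng.integers(len(actions)))
    else:
        a = int(np.argmax(Q[s]))
    x = r * x * (1.0 - x) + actions[a]   # controlled map iteration
    x = min(max(x, 0.0), 1.0)            # keep the state in [0, 1]
    reward = -abs(x - x_star)            # closer to the orbit is better
    s_next = state(x)
    Q[s, a] += alpha * (reward + gamma * np.max(Q[s_next]) - Q[s, a])
```

Because the reward penalizes distance from the target orbit, maximizing the discounted cumulative reward is exactly the stabilization objective; the same structure carries over to higher-dimensional systems with a suitable state encoding.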
“…Vashishtha and Verma (2020) used Proximal Policy Optimization (PPO) to restore chaos in the Lorenz system and showed that a simple control law can be identified from the agent’s autonomous control strategies. Additionally, Lei and Han (2021) successfully controlled spatiotemporal chaos in the Frenkel–Kontorova model using RL. More importantly, Wang et al (2020) achieved attractor selection under control constraints using two different DRL methods.…”
Section: Introduction
mentioning
confidence: 99%