2018 International Automatic Control Conference (CACS)
DOI: 10.1109/cacs.2018.8606740
An Actor-Critic Reinforcement Learning Control Approach for Discrete-Time Linear System with Uncertainty

Cited by 3 publications (2 citation statements)
References 18 publications
“…A more recent successful attempt was the study done by [16], where they applied a stochastic real-valued reinforcement learning control to a non-linear quarter-car model. A similar approach to the one considered in this research was conducted by [17]; the actor-critic networks were trained by the policy gradient method, and the controller was tested to some extent with the same road profile considered in this study. They compared their work with the passive suspension system and showed a 62% improvement.…”
Section: Introduction
confidence: 99%
“…The main idea of reinforcement learning is to develop a suspension environment that interacts with the agent throughout the learning phase, the objective of which is to maximize the reward function so as to achieve the best neural-network performance. The results obtained in [7] outperform the Linear Quadratic Gaussian (LQG) controller and show an improvement of 62% over the passive suspension.…”
Section: Introduction
confidence: 99%
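As a rough illustration of the idea described in this excerpt (an agent interacting with a suspension environment and improving its policy by maximizing a reward through actor-critic, policy-gradient updates), the following Python sketch couples a toy discrete-time quarter-car-like linear model with a linear-Gaussian actor and a TD(0) critic. All model parameters, reward weights, and learning rates are assumptions made for this example; this is not the plant, network architecture, or controller used in the cited paper or in the citing studies.

```python
# Illustrative sketch only: a linear-Gaussian actor and a TD(0) critic learning
# on a toy discrete-time quarter-car-like suspension model. All parameter
# values (masses, stiffnesses, weights, learning rates) are assumptions made
# for this example and are NOT taken from the cited paper or the citing studies.
import numpy as np

rng = np.random.default_rng(0)

# Assumed continuous-time quarter-car model, with states
# x = [suspension deflection, body velocity, tyre deflection, wheel velocity]
ms, mu, ks, kt, cs = 300.0, 40.0, 16000.0, 190000.0, 1000.0
A_c = np.array([[0.0,     1.0,    0.0,     -1.0],
                [-ks/ms, -cs/ms,  0.0,      cs/ms],
                [0.0,     0.0,    0.0,      1.0],
                [ks/mu,   cs/mu, -kt/mu,   -cs/mu]])
B_c = np.array([0.0, 1.0/ms, 0.0, -1.0/mu])

n, dt = 4, 0.001                      # forward-Euler discretisation (assumption)
A, B = np.eye(n) + dt * A_c, dt * B_c

Q = np.diag([100.0, 1.0, 10.0, 1.0])  # assumed state penalty (ride comfort, etc.)
r_u = 1e-4                            # assumed control-effort penalty

def features(x):
    """Quadratic critic features x_i * x_j plus a bias term."""
    return np.concatenate([np.outer(x, x).ravel(), [1.0]])

theta = np.zeros(n)                   # actor: u = theta @ x + exploration noise
w = np.zeros(n * n + 1)               # critic: V(x) ~ w @ features(x)
sigma, gamma = 0.5, 0.98              # exploration std-dev and discount factor
alpha_actor, alpha_critic = 1e-4, 1e-3

for episode in range(200):
    x = rng.normal(0.0, 0.05, size=n)              # random initial deflections
    for t in range(500):
        u = theta @ x + sigma * rng.normal()       # sample action from Gaussian policy
        road_vel = 0.1 * rng.normal()              # white-noise road-velocity input
        x_next = A @ x + B * u + dt * np.array([0.0, 0.0, -road_vel, 0.0])

        reward = -(x @ Q @ x + r_u * u * u)        # penalise deflections and effort
        delta = reward + gamma * (w @ features(x_next)) - w @ features(x)

        w += alpha_critic * delta * features(x)    # TD(0) critic update
        # policy-gradient actor update weighted by the TD error (advantage estimate)
        theta += alpha_actor * delta * ((u - theta @ x) / sigma**2) * x
        x = x_next

print("learned state-feedback gains:", theta)
```

In this sketch the TD error stands in for the advantage in the policy-gradient update; the neural-network actor and critic described in the cited works would replace the linear parameterisations used here.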