2019
DOI: 10.3390/app9030502

Learning an Efficient Gait Cycle of a Biped Robot Based on Reinforcement Learning and Artificial Neural Networks

Abstract: Programming robots to perform different activities requires calculating sequences of joint values while taking into account many factors, such as stability and efficiency, at the same time. For walking in particular, state-of-the-art techniques to approximate these sequences are based on reinforcement learning (RL). In this work we propose a multi-level system, where the same RL method is used first to learn the configurations of robot joints (poses) that allow it to stand with stability, and then in …

Cited by 36 publications (22 citation statements) · References 24 publications
“…The platform is employed here to simulate a complex and dynamic environment, where oscillations are introduced to the system to imitate real external disturbances. Compared with other studies where the robots are trained on a flat surface or a platform with a fixed angle under the experimental environment [15][16][17], the proposed platform provides a more complex and dynamic environment in which the robot can learn a more robust and efficient controller. Thus, the experiments in this paper will not only show the convergence of the learning procedure, but also demonstrate the robustness of the learned controller in adapting to different complex and dynamic environments.…”
Section: Formulation of the Problem (mentioning)
confidence: 99%
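The quoted passage describes perturbing the training platform with oscillations to mimic external disturbances. As a rough illustration of that idea, the short sketch below generates a sinusoidal platform tilt; the amplitude, frequency, and sampling rate are assumed values for illustration only, not parameters taken from the citing paper.

```python
import math

AMPLITUDE_RAD = math.radians(5.0)   # assumed peak platform tilt
FREQUENCY_HZ = 0.5                  # assumed oscillation frequency

def platform_pitch(t):
    """Platform tilt angle (radians) at simulation time t (seconds)."""
    return AMPLITUDE_RAD * math.sin(2.0 * math.pi * FREQUENCY_HZ * t)

# Example: sample the disturbance over one second of simulation time
for step in range(11):
    t = step * 0.1
    print(f"t={t:.1f}s  pitch={platform_pitch(t):+.4f} rad")
```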
“…Many attempts based on model-free RL frameworks have been made recently to incorporate RL into biped robot walking control and avoid calculating the mathematical model. Gil [15] utilized Q-Learning to find a sequence of poses that allows a NAO robot to reach the furthest distance in the shortest time, while still keeping a straight path without falling down. However, the actions were discrete, so the method lacked a smooth transfer between two poses.…”
Section: Introduction (mentioning)
confidence: 99%
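The snippet above describes tabular Q-Learning over a discrete set of pose actions, rewarded by forward progress without falling. The sketch below is a minimal, self-contained illustration of that technique on a toy one-dimensional walking task; the environment, pose set, reward values, and hyperparameters are all assumptions for illustration, not the actual setup of Gil [15].

```python
import random
from collections import defaultdict

N_POSES = 4                 # hypothetical discrete pose actions
GOAL = 10                   # toy target distance (abstract units)
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.2

Q = defaultdict(float)      # Q[(state, action)] -> estimated return

def toy_step(state, action):
    """Toy stand-in for a walking simulator: each pose action advances the
    robot by a different expected amount, with a growing chance of falling."""
    if random.random() < 0.05 * action:         # riskier poses fall more often
        return state, -10.0, True               # fell: episode ends
    progress = 1 if random.random() < 0.5 + 0.1 * action else 0
    next_state = state + progress
    return next_state, float(progress), next_state >= GOAL

def choose_action(state):
    # epsilon-greedy over the discrete pose set
    if random.random() < EPSILON:
        return random.randrange(N_POSES)
    return max(range(N_POSES), key=lambda a: Q[(state, a)])

for episode in range(2000):
    state, done = 0, False
    while not done:
        action = choose_action(state)
        next_state, reward, done = toy_step(state, action)
        # one-step Q-learning update toward the greedy bootstrap target
        best_next = max(Q[(next_state, a)] for a in range(N_POSES))
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state
```

Because the action set is discrete, the learned policy jumps between poses, which is exactly the lack of smooth transfer that the citing authors point out.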
“…Even after designing for bipedal stability and smooth trajectories, online strategies (Table 3) are required for stable landing of the foot on the ground and for avoiding sudden jerks while walking that would harm the biped [10][11][12][13]. The force/torque sensor at the ankle results in sustained oscillations in the single-support phase (SSP), which are overcome by damping the oscillator parameters.…”
Section: Proposed Model of Bipedal Robot (mentioning)
confidence: 99%
“…Reinforcement learning can be applied to efficient gait control of a biped robot. Gil et al. [37] presented a reinforcement learning mechanism that handles stability and efficiency of movement, thus improving the speed and precision of the trajectory. Yang et al. [38] presented an interesting approach that transforms the complex motion of robot turning into a simple translational motion.…”
Section: Advanced Mobile Robotics (mentioning)
confidence: 99%