2023
DOI: 10.1109/lra.2023.3281290
Learning Complex Motor Skills for Legged Robot Fall Recovery

Abstract: Falling is inevitable for legged robots in challenging real-world scenarios, where environments are unstructured and situations are unpredictable, such as uneven terrain in the wild. Hence, to recover from falls and achieve all-terrain traversability, it is essential for intelligent robots to possess the complex motor skills required to resume operation. To go beyond the limitation of handcrafted control, we investigated a deep reinforcement learning approach to learn generalized feedback-control policies for …

Cited by 6 publications (3 citation statements)
References 24 publications
“…Ref. [9] proposed a design guideline for selecting key states for initialization and showed that the learned fall recovery policies are hardware-feasible and can be implemented on real robots. Furthermore, DRL has been used to learn fall recovery for humanoid character animation in physics simulation [10].…”
Section: Related Work
confidence: 99%
“…An alternative for obtaining fall recovery motions is model-free reinforcement learning (RL), where an agent interacts with its environment and learns the control policy through trial and error. Deep reinforcement learning (DRL) has been used successfully to learn fall recovery policies both in simulation and on real-world robots [9,10]. The significant advantages of using RL are that it requires less prior knowledge from human experts and is less labor-intensive than manual handcrafting, and that the trained neural network is a feedback policy that can compute actions quickly in real time, compared to optimization-based methods [9].…”
Section: Introduction
confidence: 99%
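The trial-and-error loop described in that quote can be illustrated with a deliberately minimal sketch. This is not the paper's method (which uses deep RL on a full legged robot): here the "robot" is a hypothetical 1-D fallen torso modeled as an unstable pendulum, the policy is a two-parameter linear feedback law, and the optimizer is simple random-search hill climbing rather than DRL. The names `rollout`, `train`, and the dynamics constants are all assumptions made up for this example; only the overall pattern (perturb policy, run episode, keep what improves, deploy a fast feedback policy) reflects the cited idea.

```python
import math
import random

def rollout(gain, steps=200, dt=0.05):
    """Run one episode: a toy fallen torso (inverted pendulum) driven by
    the linear feedback policy u = -gain[0]*angle - gain[1]*vel.
    Returns the episode reward (higher = stayed closer to upright)."""
    angle, vel = math.pi / 2, 0.0          # start fallen on its side
    total = 0.0
    for _ in range(steps):
        torque = -gain[0] * angle - gain[1] * vel  # feedback policy: cheap to evaluate
        acc = torque + 0.5 * math.sin(angle)       # gravity destabilizes upright (angle 0)
        vel += acc * dt
        angle += vel * dt
        total += -abs(angle)                       # penalize distance from upright
    return total

def train(iters=300, sigma=0.3, seed=0):
    """Model-free trial and error: perturb the policy parameters with
    Gaussian noise and keep a perturbation only if the episode return
    improves. No model of the dynamics is used by the learner."""
    rng = random.Random(seed)
    gain = [0.0, 0.0]
    best = rollout(gain)
    for _ in range(iters):
        cand = [g + rng.gauss(0.0, sigma) for g in gain]
        r = rollout(cand)
        if r > best:
            gain, best = cand, r
    return gain, best
```

After training, the learned `gain` is a feedback policy: computing an action is two multiplies and an add per step, which is the real-time advantage the quote contrasts against solving an optimization problem online.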