Learning quadrupedal locomotion on deformable terrain
2023 | DOI: 10.1126/scirobotics.ade2256

Abstract: Simulation-based reinforcement learning approaches are leading the next innovations in legged robot control. However, the resulting control policies are still not applicable on soft and deformable terrains, especially at high speed. The primary reason is that reinforcement learning approaches, in general, are not effective beyond the data distribution: The agent cannot perform well in environments that it has not experienced. To this end, we introduce a versatile and computationally efficient granular media mo…
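
The abstract describes training locomotion policies against a granular media model so that the data distribution covers soft ground. As a rough illustration only, and not the model from the paper, the sketch below shows a toy penetration-depth-dependent foot-contact force of the kind often used to approximate deformable ground in simulators; the function name, stiffness, damping, and drag parameters are all assumed placeholders.

```python
# Illustrative toy model only -- not the granular media model from the paper.
# All constants below are assumed placeholder values.

def granular_contact_force(depth, velocity, k=4000.0, d=60.0, c_drag=25.0):
    """Vertical ground-reaction force for a foot sinking into granular terrain.

    depth    -- penetration depth below the nominal surface in meters (>= 0)
    velocity -- vertical foot velocity in m/s (negative while sinking)
    """
    if depth <= 0.0:
        return 0.0                      # foot above the surface: no contact
    force = k * depth - d * velocity    # depth-dependent support plus damping
    if velocity < 0.0:
        force += c_drag * depth * velocity ** 2   # extra drag while sinking
    return max(force, 0.0)              # granular media cannot pull the foot down
```

In a simulator, a force of this kind would replace the rigid-ground contact response at each foot, which is what lets a policy experience sinking and slipping during training rather than only at deployment.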

Cited by 56 publications (15 citation statements) | References 38 publications
“…Comparison with robot learning literature RL for robots has been studied for decades [see (5,6) for an overview] but has only recently gained more popularity because of the development of better hardware and algorithms (4,37). In particular, high-quality quadrupedal robots have become widely available, which have been used to demonstrate robust, efficient, and practical locomotion in a variety of environments (12,13,38,39). For example, Lee et al (12) applied zero-shot sim-to-real deep RL to deploy learned locomotion policies in natural environments, including mud, snow, vegetation, and streaming water.…”
Section: Discussion
confidence: 99%
“…In particular, deep reinforcement learning (deep RL) has proven capable of solving complex motor control problems for both simulated characters (7)(8)(9)(10)(11) and physical robots. High-quality quadrupedal legged robots have become widely available and have been used to demonstrate behaviors ranging from robust (12,13) and agile (14,15) locomotion to fall recovery (16); climbing (17); basic soccer skills such as dribbling (18,19), shooting (20), intercepting (21), or catching (22) a ball; and simple manipulation with legs (23). On the other hand, much less work has been dedicated to the control of humanoids and bipeds, which impose additional challenges around stability, robot safety, number of degrees of freedom, and availability of suitable hardware.…”
Section: Introduction
confidence: 99%
“…Although we can hope that a policy trained with other randomizations can generalize to these terrains, its performance is likely to be suboptimal. To address this, Choi et al (2023) augmented a simulator with a deformable terrain model and trained a quadruped robot to better adapt to the dynamics of soft and granular surfaces. Other work has expanded the locomotion task via manipulation of the reward function rather than manipulation of the simulated environments.…”
Section: Perspective on the Field and Future Directions
confidence: 99%
“…In recent research, machine learning‐based methods have been proposed for implicitly representing terrain features (Choi et al, 2023; Lee et al, 2020). Such methods train a robust controller, that is dedicated to responding to terrain uncertainty by generating a large number of random terrains in a simulation environment.…”
Section: Related Work
confidence: 99%
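
The last citation statement describes training a robust controller by exposing it to a large number of randomly generated terrains in simulation. The snippet below is a minimal sketch of that idea under assumed parameters (grid size, roughness range, smoothing passes); it is not code from any of the cited works.

```python
# Minimal sketch of per-episode terrain randomization; all values are assumed.
import numpy as np

def sample_random_terrain(size=64, max_roughness=0.08, smooth_passes=2, rng=None):
    """Return a (size x size) heightfield in meters with random roughness."""
    rng = rng if rng is not None else np.random.default_rng()
    roughness = rng.uniform(0.0, max_roughness)              # episode difficulty
    heights = rng.uniform(-roughness, roughness, (size, size))
    for _ in range(smooth_passes):                            # cheap box smoothing
        heights = 0.25 * (np.roll(heights, 1, axis=0) + np.roll(heights, -1, axis=0)
                          + np.roll(heights, 1, axis=1) + np.roll(heights, -1, axis=1))
    return heights

# A fresh terrain per training episode exposes the policy to varied geometry.
episode_terrains = [sample_random_terrain() for _ in range(10)]
```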