2022
DOI: 10.1126/scirobotics.abk2822

Learning robust perceptive locomotion for quadrupedal robots in the wild

Abstract: Legged robots that can operate autonomously in remote and hazardous environments will greatly increase opportunities for exploration into underexplored areas. Exteroceptive perception is crucial for fast and energy-efficient locomotion: Perceiving the terrain before making contact with it enables planning and adaptation of the gait ahead of time to maintain speed and stability. However, using exteroceptive perception robustly for locomotion has remained a grand challenge in robotics. Snow, vegetation, and water…


Cited by 409 publications (316 citation statements)
References 58 publications
“…We spawn 4096 environments in parallel to learn all three tasks simultaneously in a single neural network. The number of environments per task is weighted according to their approximate difficulty, e.g., [1, 1, 5] in the case of the tasks described above. The state transitions collected during the rollouts of these environments are mapped using a function φ(s) that extracts the linear and angular base velocity, the gravity direction in the base frame, the base's height above ground, the joint positions and velocities, and finally the positions of the wheels relative to the robot's base frame, i.e., $\phi(s) = (\dot{x}_{\mathrm{base}}, x_z, e_{\mathrm{base}}, q, \dot{q}, x_{\mathrm{ee,base}}) \in \mathbb{R}^{50}$.…”
Section: Results (mentioning)
confidence: 99%
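As a rough illustration of the scheme quoted above, the sketch below allocates a fixed budget of parallel environments across tasks by difficulty weight and assembles a 50-dimensional feature vector in the spirit of φ(s). The state-field names and the per-component dimension split are assumptions made for illustration; the excerpt does not specify them.

```python
import numpy as np

NUM_ENVS = 4096
# Per-task difficulty weights, following the quoted example [1, 1, 5].
TASK_WEIGHTS = np.array([1.0, 1.0, 5.0])

def allocate_envs(num_envs, weights):
    """Split a fixed budget of parallel environments across tasks in
    proportion to their approximate difficulty weights."""
    counts = np.floor(num_envs * weights / weights.sum()).astype(int)
    counts[-1] += num_envs - counts.sum()  # hand the rounding remainder to the last task
    return counts

def phi(state):
    """Assemble phi(s) = (xdot_base, x_z, e_base, q, qdot, x_ee_base).
    The dict keys and per-component sizes are assumed, chosen so the
    result is 50-dimensional as in the excerpt."""
    return np.concatenate([
        state["base_lin_vel"],      # 3  linear base velocity
        state["base_ang_vel"],      # 3  angular base velocity
        state["gravity_dir_base"],  # 3  gravity direction in the base frame
        [state["base_height"]],     # 1  base height above ground
        state["joint_pos"],         # 16 joint positions
        state["joint_vel"],         # 16 joint velocities
        state["wheel_pos_base"],    # 8  wheel positions relative to the base
    ])                              # total: 50 dimensions

print(allocate_envs(NUM_ENVS, TASK_WEIGHTS))  # -> [ 585  585 2926]
```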
“…Reinforcement Learning (RL) has had a significant impact in the space of legged locomotion, showcasing robust policies that can handle a wide variety of challenging terrain in the real world [1]. With this advancement, we believe that these articulated robots can perform specialized motions like their natural counterparts.…”
Section: Introduction (mentioning)
confidence: 99%
“…OpenAI trained Dactyl to dexterously manipulate physical objects with a human-like robot hand (OpenAI et al., 2018). A series of works has advanced quadrupedal robot locomotion over challenging terrain in the wild, integrating both exteroceptive and proprioceptive perception (Hwangbo et al., 2019; Lee et al., 2020; Miki et al., 2022). Peng et al. (2018) present DeepMimic for simulated humanoids to perform highly dynamic and acrobatic skills.…”
Section: Robotics (mentioning)
confidence: 99%
“…Sensory information from cameras and LiDAR reveals a great deal about the characteristics of terrain and can be leveraged to anticipate its effects on the robot's dynamics. While researchers have recently started to incorporate visual information into gait planners for legged robots [13], wheeled mobile robot motion planners that use visual information have been mostly limited to end-to-end learning solutions.…”
Section: B. Error Modelling and Reactive Control (mentioning)
confidence: 99%
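To make concrete how exteroceptive data can feed a planner that anticipates terrain, here is a minimal sketch that bins a robot-frame point cloud (e.g., from LiDAR) into a local height grid of the kind a perceptive gait planner could consume. The grid size, resolution, and max-height aggregation are illustrative assumptions, not details taken from the cited works.

```python
import numpy as np

def height_scan(points, grid_size=11, cell=0.1):
    """Bin a robot-frame point cloud (N x 3, meters) into a square height
    grid centered on the robot, keeping the maximum height per cell.
    Unobserved cells stay NaN so the planner can treat them conservatively."""
    half = grid_size * cell / 2.0
    grid = np.full((grid_size, grid_size), np.nan)
    ix = np.floor((points[:, 0] + half) / cell).astype(int)
    iy = np.floor((points[:, 1] + half) / cell).astype(int)
    valid = (ix >= 0) & (ix < grid_size) & (iy >= 0) & (iy < grid_size)
    for x, y, z in zip(ix[valid], iy[valid], points[valid, 2]):
        grid[x, y] = z if np.isnan(grid[x, y]) else max(grid[x, y], z)
    return grid

# Example: a synthetic cloud with a 0.15 m step ahead of the robot.
pts = np.array([[0.3, 0.0, 0.15], [-0.2, 0.1, 0.0], [0.3, 0.0, 0.12]])
print(height_scan(pts)[8, 5])  # cell containing x=0.3, y=0.0 -> 0.15
```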