2009 IEEE International Conference on Robotics and Automation
DOI: 10.1109/robot.2009.5152795

Stereo vision and terrain modeling for quadruped robots

Abstract: Legged robots offer the potential to navigate highly challenging terrain, and there has recently been much progress in this area. However, a great deal of this recent work has operated under the assumption that either the robot has complete knowledge of its environment or that its environment is suitably regular so as to be navigated with only minimal perception, an unrealistic assumption in many real-world domains. In this paper we present an integrated perception and control system for a quadruped robot that…

Cited by 64 publications (45 citation statements)
References 23 publications
“…Stereo vision has been used for terrain modeling with walking robots so far only in few projects, mostly because typical stereo systems impose high costs of 3D points computation. In the works of Kolter et al (2009) and Rusu et al (2009) walking robots with stereo-based perception are shown to autonomously traverse a rugged terrain, but in both cases the computations are performed off-board, and explicit propagation of the spatial uncertainty from the stereo data to the elevation map is not taken into account. The knowledge about the elevation uncertainty of each cell in the map allows planning the path of the walking robot more efficiently, while avoiding uncertain areas (Belter and Skrzypczyński, 2011b).…”
Section: Related Work
confidence: 99%
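The citation statement above notes that knowing the elevation uncertainty of each map cell lets a walking robot plan paths that avoid uncertain areas. A minimal sketch of that idea, assuming a grid elevation map with a per-cell variance and a hypothetical cost function (the function name, `risk_weight`, and the step-height limit are illustrative assumptions, not the cited authors' formulation):

```python
import heapq

def plan_path(heights, variances, start, goal,
              max_step=0.15, risk_weight=5.0):
    """Dijkstra search over a grid elevation map. Entering a cell costs
    1 plus a penalty proportional to its elevation variance, so the
    planner detours around uncertain areas when that is cheaper."""
    rows, cols = len(heights), len(heights[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            # Reject steps that exceed the robot's climbing ability.
            if abs(heights[nr][nc] - heights[r][c]) > max_step:
                continue
            # Uncertain cells (high variance) cost more to enter.
            nd = d + 1.0 + risk_weight * variances[nr][nc]
            if nd < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = nd
                prev[(nr, nc)] = cell
                heapq.heappush(pq, (nd, (nr, nc)))
    if goal not in dist:
        return None
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]
```

On a flat map where the direct route crosses high-variance cells, the returned path takes the longer but certain detour whenever the variance penalty outweighs the extra steps.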
“…Kolter et al in [13] took a step further in autonomy by removing the dependence on given maps and external state input. In their control framework they use a well established point-cloud matching technique to iteratively build a map of their environment and afterwards navigate in it.…”
Section: Related Work
confidence: 99%
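The "well established point-cloud matching technique" referred to above is typically an ICP-style scheme: alternate between matching each source point to its nearest destination point and solving the best rigid transform in closed form. A self-contained 2D sketch (the cited work's exact variant and parameters may differ):

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Iteratively align src onto dst: nearest-neighbour matching
    followed by a closed-form rigid fit (Kabsch/SVD) each iteration.
    Returns the accumulated rotation and translation."""
    src = np.asarray(src, dtype=float).copy()
    dst = np.asarray(dst, dtype=float)
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # Nearest-neighbour correspondences (brute force for clarity).
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Closed-form rigid alignment of src onto its matches.
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t
        # Compose with the transform accumulated so far.
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Applying the returned transform to successive stereo point clouds lets the map be built incrementally, which is the iterative map-building strategy the statement describes.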
“…Miller (2002) fused the measurements from a laser range finder and a calibrated camera mounted on a helicopter to construct terrain models. Another camera-based approach was proposed recently by Kolter, Kim, and Ng (2009), who equipped a quadruped robot with a stereo-camera system and dealt with the problem of how to infer a dense elevation map of the terrain from the sparse stereo correspondences. Popular approaches from the computer vision literature that do not require a stereo setup are to extract 3D shape from shading (Bors, Hancock, & Wilson, 2003) or from shadows (Daum & Dudek, 1998).…”
Section: Related Work
confidence: 99%
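The statement above describes inferring a dense elevation map from sparse stereo correspondences. Kolter, Kim, and Ng learn this inference from data; as a much simpler stand-in, the sketch below fills each grid cell by inverse-distance weighting of the nearest sparse measurements (function name, `cell` size, and `k` are illustrative assumptions):

```python
import math

def dense_elevation_map(points, rows, cols, cell=0.1, k=4):
    """Fill a dense rows x cols elevation grid from sparse (x, y, z)
    points by inverse-distance weighting of the k nearest measurements.
    A fixed interpolation rule, not the learned inference of the paper."""
    grid = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            cx, cy = (c + 0.5) * cell, (r + 0.5) * cell
            # k sparse points closest to this cell centre.
            near = sorted(points,
                          key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)[:k]
            wsum = zsum = 0.0
            for x, y, z in near:
                d = math.hypot(x - cx, y - cy)
                if d < 1e-9:          # measurement falls on the centre
                    wsum, zsum = 1.0, z
                    break
                w = 1.0 / d
                wsum += w
                zsum += w * z
            grid[r][c] = zsum / wsum
    return grid
```

Even this crude fill illustrates the problem setting: stereo yields reliable depth only at textured pixels, so every cell between correspondences must be estimated rather than measured.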