Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '04), 2004
DOI: 10.1109/robot.2004.1307213
Obstacle avoidance and path planning for humanoid robots using stereo vision

Cited by 156 publications (82 citation statements)
References 8 publications
“…Stereo vision [15,16] is a robust approach to obstacle detection, but it is limited by the baseline: as the baseline narrows, depth estimates become noisy. Furthermore, the need for two cameras limits its use on compact MAVs.…”
Section: Related Work
confidence: 99%
“…To obtain 3D information, it calculates the disparities between the matching points of a stereo image pair. The obtained 3D information can be used in a number of applications, such as automotive vehicles, robot vision, monitoring systems, mobile applications, and 3D reconstruction [1][2][3]. Crucial to the reliability of 3D information in these applications is the accurate calculation of disparity that is carried out by two types of stereo matching algorithms: local and global matching algorithms [4,5].…”
Section: Introduction
confidence: 99%
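The local matching approach the quoted passage mentions can be illustrated with a minimal sketch. This is a hypothetical example (not code from the cited work): a brute-force block-matching routine that, for each left-image pixel, picks the horizontal shift into the right image minimizing the sum of absolute differences (SAD) over a small window.

```python
import numpy as np

def disparity_sad(left, right, max_disp=4, win=1):
    """Local (block-matching) stereo: for every left-image pixel, pick the
    horizontal shift d into the right image that minimizes the sum of
    absolute differences (SAD) over a (2*win+1) x (2*win+1) window."""
    h, w = left.shape
    # Pad with edge values so windows near the borders stay in bounds;
    # original pixel (y, x) sits at padded coordinates (y + win, x + win).
    L = np.pad(left, win, mode="edge").astype(np.int64)
    R = np.pad(right, win, mode="edge").astype(np.int64)
    disp = np.zeros((h, w), dtype=np.int64)
    for y in range(h):
        for x in range(w):
            best_cost, best_d = None, 0
            for d in range(min(max_disp, x) + 1):
                # Window centered at (y, x) in the left image and at
                # (y, x - d) in the right image.
                lw = L[y : y + 2 * win + 1, x : x + 2 * win + 1]
                rw = R[y : y + 2 * win + 1, x - d : x - d + 2 * win + 1]
                cost = int(np.abs(lw - rw).sum())
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Global matching algorithms differ in that they optimize a smoothness-regularized energy over the whole image rather than deciding each pixel independently; the per-pixel winner-take-all step above is what makes this a local method.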
“…For visual perception, QRIO is equipped with stereo cameras with a field of view of 47° horizontally and 39° vertically, and an FPGA module for stereo processing. This subsystem provides disparity images at 12.5 fps in resolutions of 176x144 and 88x72 pixels [17]. We use the lower 88x72 resolution, which allows all frames to be processed in real time on the robot.…”
Section: Methods
confidence: 99%
“…The maps are centered on the robot such that they reflect a 4x4 m area around it. When the robot moves, the grids are shifted according to odometry information obtained from the kinematics from one foot to the other [17].…”
Section: Methods
confidence: 99%
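The grid-shifting scheme described above can be sketched as follows. This is a hypothetical illustration, not the cited implementation: when odometry reports the robot has moved by some whole number of cells, the robot-centered occupancy grid is scrolled in the opposite direction, and cells that enter from outside the previous map area are reset to an unknown occupancy value.

```python
import numpy as np

def shift_grid(grid, dx_cells, dy_cells, unknown=0.5):
    """Keep a robot-centered occupancy grid aligned with the world:
    when the robot moves (dx_cells, dy_cells), the map content shifts
    by (-dx_cells, -dy_cells) relative to the robot. Newly exposed
    cells are initialized to the 'unknown' occupancy value."""
    h, w = grid.shape
    shifted = np.full_like(grid, unknown)
    # Source/destination slices that copy the overlapping region.
    src_y = slice(max(0, dy_cells), min(h, h + dy_cells))
    dst_y = slice(max(0, -dy_cells), min(h, h - dy_cells))
    src_x = slice(max(0, dx_cells), min(w, w + dx_cells))
    dst_x = slice(max(0, -dx_cells), min(w, w - dx_cells))
    shifted[dst_y, dst_x] = grid[src_y, src_x]
    return shifted
```

For example, after the robot advances one cell in x, an obstacle that was three cells ahead appears two cells ahead in the shifted grid, while the column scrolling in at the far edge is marked unknown until new sensor data fills it.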