Abstract: In this paper, we consider the problem of building 3D models of complex staircases from laser range data acquired with a humanoid. These models have to be sufficiently accurate to enable the robot to reliably climb the staircase. We evaluate two state-of-the-art approaches to plane segmentation for humanoid navigation given 3D range data about the environment. The first approach initially extracts line segments from neighboring 2D scan lines and successively combines them if they lie on the same plane. The second approach estimates the main directions in the environment by randomly sampling points and afterwards applies a clustering technique to find planes orthogonal to these directions. We propose extensions to the latter approach that increase its robustness in complex environments, which may contain a large number of different planes as well as clutter. In practical experiments, we thoroughly evaluate all methods on data acquired with a laser-equipped Nao robot in a multi-level environment. As the experimental results show, the reconstructed 3D models can be used to autonomously climb complex staircases.
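To make the segment-merging idea of the first approach concrete, the following is a minimal Python sketch: scan-line segments (given here as arrays of 3D points) are greedily merged as long as a joint least-squares plane fit stays tight. All function names, thresholds, and the greedy merging order are illustrative assumptions; the abstract does not specify the original method's exact criteria.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points via SVD.
    Returns (unit normal, plane offset, RMS point-to-plane residual)."""
    c = points.mean(axis=0)
    # The normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]
    res = (points - c) @ n
    return n, float(n @ c), float(np.sqrt(np.mean(res ** 2)))

def merge_coplanar(segments, res_thresh=0.01):
    """Greedily merge scan-line segments that lie on a common plane.
    `segments` is a list of (k, 3) arrays, one per extracted line segment;
    two groups are merged if their joint plane fit stays below `res_thresh`
    (meters). This is a simplified stand-in for the merging step above."""
    merged = [seg.copy() for seg in segments]
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                joint = np.vstack([merged[i], merged[j]])
                if fit_plane(joint)[2] < res_thresh:
                    merged[i] = joint
                    del merged[j]
                    changed = True
                    break
            if changed:
                break
    return merged
```

In a real pipeline, the pairwise search would be restricted to segments from neighboring scan lines to avoid the quadratic blow-up; the sketch omits this for brevity.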
Abstract: Reliable and efficient navigation with a humanoid robot is a difficult task. First, motion commands are executed rather inaccurately due to backlash in the joints or foot slippage. Second, the observations are typically highly affected by noise due to the shaking motion of the robot. Thus, the localization performance typically degrades while the robot moves, and the uncertainty about its pose increases. As a result, reliable and efficient execution of a navigation task can no longer be ensured, since the robot's pose estimate might not correspond to its true location. In this paper, we present a reinforcement learning approach to selecting appropriate navigation actions for a humanoid robot equipped with a camera for localization. The robot learns to reach its destination reliably and as fast as possible, choosing actions that account for motion drift and trade off velocity, i.e., fast walking movements, against localization accuracy. We present extensive simulated and real-world experiments with a humanoid robot and demonstrate that our learned policy significantly outperforms a hand-optimized navigation strategy.
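The abstract does not specify which learning algorithm is used. As a generic illustration only, the sketch below shows tabular epsilon-greedy Q-learning over a hypothetical environment interface whose state could encode the robot's discretized pose estimate and its uncertainty; the action set, reward structure, and all parameters are assumptions, not the paper's method.

```python
import random
from collections import defaultdict

# Hypothetical action set: fast/slow walking trades speed against
# localization accuracy, as described in the abstract.
ACTIONS = ["walk_fast", "walk_slow", "turn_left", "turn_right",
           "stand_and_localize"]

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    """Generic epsilon-greedy tabular Q-learning loop.
    `env` is assumed to offer reset() -> state and
    step(action) -> (next_state, reward, done), where states are hashable
    (e.g., a discretized pose estimate plus an uncertainty bucket)."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # Explore with probability eps, otherwise act greedily.
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2, r, done = env.step(a)
            best_next = max(Q[(s2, act)] for act in ACTIONS)
            # Standard Q-learning temporal-difference update.
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q
```

A reward that penalizes elapsed time and large pose uncertainty, and rewards reaching the goal, would reproduce the trade-off the abstract describes.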
Abstract: In order to successfully climb challenging staircases that consist of many steps and contain difficult parts, humanoid robots need to accurately determine their pose. In this paper, we present an approach that fuses the robot's observations from a 2D laser scanner, a monocular camera, an inertial measurement unit, and joint encoders in order to localize the robot within a given 3D model of the environment. We develop an extension to standard Monte Carlo localization (MCL) that draws particles from an improved proposal distribution to obtain highly accurate pose estimates. Furthermore, we introduce a new observation model based on chamfer matching between edges in camera images and edges in the environment model. We thoroughly evaluate our localization approach and compare it to previous techniques in real-world experiments with a Nao humanoid. The results show that our approach significantly improves the localization accuracy and leads to considerably more robust robot behavior. Our improved proposal distribution, in combination with chamfer matching, can be applied generally to refine a range-based pose estimate through consistent matching of lines obtained from vision.
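As a rough illustration of a chamfer-matching observation model, the sketch below scores a binary edge image extracted from the camera against edges rendered from the environment model for a particle's pose hypothesis, using a distance transform, and converts the score into a particle weight with a simple Gaussian sensor model. The function names and the Gaussian weighting are assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(observed_edges, model_edges):
    """Chamfer matching score between two boolean edge images.
    Lower is better: mean distance from each observed edge pixel to the
    nearest edge pixel of the rendered model."""
    # Distance transform of the complement gives, at every pixel, the
    # distance to the nearest model edge pixel.
    dt = distance_transform_edt(~model_edges)
    ys, xs = np.nonzero(observed_edges)
    if len(ys) == 0:
        return np.inf  # No observed edges: nothing to match.
    return float(dt[ys, xs].mean())

def observation_likelihood(score, sigma=2.0):
    """Turn a chamfer score (pixels) into a particle weight under an
    assumed Gaussian sensor model with standard deviation `sigma`."""
    return float(np.exp(-0.5 * (score / sigma) ** 2))
```

Within MCL, each particle would render the model edges for its pose hypothesis, compute the chamfer score against the camera edge image, and be reweighted by the resulting likelihood before resampling.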