Warping (Franz et al., Biological Cybernetics 79(3), 191-202, 1998b) and 2D-warping (Möller, Robotics and Autonomous Systems 57(1), 87-101, 2009) are effective visual homing methods that can be applied to navigation in topological maps. This paper presents several improvements to 2D-warping and introduces two novel "free" warping methods in the same framework. The free warping methods partially lift the assumption of the original warping method that all landmarks have the same distance from the goal location. Experiments on image databases confirm the effect of the improvements to 2D-warping and show that the two free warping methods produce more precise home vectors with approximately the same proportion of erroneous home vectors. In addition, two novel and easier-to-interpret performance measures for the angular error are introduced.
In the context of vision-based topological navigation, detecting loop closures requires comparing the robot's current camera image to a large number of images stored in the map. For efficient image comparisons, we apply distance functions to global image descriptors, i.e., low-dimensional descriptors derived from the entire panoramic image. To identify promising combinations of descriptors and distance functions, we formulate loop-closure detection as a binary classification problem and analyze the resulting receiver operating characteristics (ROC). The results of comparing a wide range of descriptors and distance functions reveal that reliable loop-closure detection is possible with a single 16- to 128-dimensional image descriptor based on gray-value histograms or Fourier descriptors, and that all considered distance functions have comparable performance.

I. MOTIVATION

For map-building applications of mobile robots, it is essential to correctly recognize places which have already been visited. This problem is referred to as the loop-closure problem [1], [2], [3]. Correctly detecting loop closures is essential not only for SLAM and other position-based navigation methods (the estimate of the robot's position might drift from the true position) but also for position-less topological navigation methods. Both position-based and position-less navigation strategies have to rely on external sensor cues to detect loop closures. If loop closures are not detected correctly, the resulting map becomes inconsistent, and navigation algorithms can fail.

In this paper, we propose a parsimonious appearance-based method to detect loop closures. We assume a topological representation of space (with or without additional position information) which stores panoramic images. Detecting loop closures is then a matter of comparing the robot's current image with all the images stored in the map.
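To illustrate the binary-classification view of loop-closure detection, the following sketch sweeps a threshold over image-pair dissimilarities and traces the resulting ROC curve. The distances and labels are toy data, and the function name `roc_points` is illustrative; the paper's actual descriptors and distance functions are described below.

```python
import numpy as np

def roc_points(distances, labels):
    """Compute ROC points (FPR, TPR) by sweeping a distance threshold.

    distances: image-pair dissimilarities (small = likely same place)
    labels:    1 if the pair is a true loop closure, else 0
    """
    order = np.argsort(distances)      # most similar pairs first
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)             # true positives at each threshold
    fp = np.cumsum(1 - labels)         # false positives at each threshold
    tpr = tp / labels.sum()
    fpr = fp / (len(labels) - labels.sum())
    return fpr, tpr

# Toy example: small distances mostly correspond to true loop closures.
d = np.array([0.1, 0.2, 0.25, 0.8, 0.9, 1.0])
y = np.array([1,   1,   0,    1,   0,   0])
fpr, tpr = roc_points(d, y)
# Area under the ROC curve via the trapezoidal rule.
auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)
```

A descriptor/distance combination is then judged by its ROC curve (or its AUC): the closer the curve hugs the top-left corner, the better loop closures are separated from non-matching image pairs.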
For this reason, visual loop-closure detection is closely related to image retrieval (review: [4]). With their full 360° azimuthal field of view, panoramic images are well suited for vision-based loop-closure detection, since images taken at identical positions in space but with different orientations of the robot contain the same visible image information. If operating on cylindrical images obtained from unfolding the original camera image (Fig. III), only the horizontal positions of the imaged features are shifted depending on the robot's orientation (assuming that the robot moves in the plane and rotates around its vertical axis). Instead of comparing images pixel-by-pixel, we transform each cylindrical image into a single descriptor with a low dimensionality and use these descriptors to compute the image dissimilarities. By choosing appropriate image transformations, this approach not only allows for more efficient image comparisons but also yields descriptors which are invariant to changes of the robot's orientation. Hence, two images taken at identical positions but with different robot orientations can be recognized as identical...
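A minimal sketch of one such orientation-invariant descriptor, the gray-value histogram: a rotation of the robot shifts the columns of the cylindrical image but leaves the distribution of gray values untouched, so the histogram is unchanged. The bin count and the L1 distance function here are illustrative choices, not the paper's full set of descriptors and distances.

```python
import numpy as np

def gray_histogram_descriptor(cyl_image, bins=16):
    """Low-dimensional, orientation-invariant descriptor of a
    cylindrical panoramic image: a normalized gray-value histogram."""
    hist, _ = np.histogram(cyl_image, bins=bins, range=(0, 256))
    return hist / hist.sum()

def l1_distance(d1, d2):
    """One possible dissimilarity between two descriptors."""
    return np.abs(d1 - d2).sum()

# A horizontally shifted copy of the same panorama (the robot rotated
# in place) yields exactly the same descriptor.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 360))   # 64 rows, 360 columns
rotated = np.roll(img, 90, axis=1)           # 90-degree rotation
d_same = l1_distance(gray_histogram_descriptor(img),
                     gray_histogram_descriptor(rotated))
```

Because `np.roll` only permutes pixel columns, both images contain the same gray values and `d_same` is exactly zero, which is the invariance property exploited for loop-closure detection.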