Place recognition is a key capability for navigating robots. While significant advances have been achieved on large, stable platforms such as robot cars, achieving robust performance on rapidly manoeuvring platforms in outdoor natural conditions remains a challenge, and few systems can cope with both variable conditions and the large tilt variations caused by rough terrain. Taking inspiration from biology, we propose a novel combination of sensory modality and image processing that substantially improves the robustness of sequence-based image matching for place recognition. We use a UV-sensitive fisheye-lens camera to segment sky from ground, providing illumination invariance, and encode the resulting binary images using spherical harmonics to enable rotation-invariant image matching. In combination, these methods also produce substantial pitch and roll invariance, as the spherical harmonics for the sky shape are minimally affected provided the sky remains visible. We evaluate our method against a leading appearance-invariant technique (SeqSLAM) and a leading viewpoint-invariant technique (FAB-MAP 2.0) on three new outdoor datasets encompassing variable robot heading, tilt, and lighting conditions in both forested and urban environments. The system demonstrates improved condition- and tilt-invariance, enabling robust place recognition during aggressive zigzag manoeuvring along bumpy trails and at tilt angles of up to 60 degrees.
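The property exploited here is that the per-degree power of a spherical-harmonic expansion is invariant under 3D rotations of the sphere, so re-oriented or tilted views of the same sky shape yield nearly identical descriptors. The sketch below is our own minimal illustration of that idea, not the paper's implementation; the grid resolution and band limit `L_MAX` are arbitrary choices.

```python
# Rotation-invariant matching of binary sky/ground masks via the
# spherical-harmonic power spectrum (illustrative sketch only).
import numpy as np
from scipy.special import sph_harm  # Y_l^m(azimuth, colatitude); deprecated
                                    # in recent SciPy in favour of sph_harm_y

L_MAX = 8                      # band limit of the expansion
N_THETA, N_PHI = 128, 64       # azimuth x colatitude grid

theta = np.linspace(0, 2 * np.pi, N_THETA, endpoint=False)  # azimuth
phi = np.linspace(0, np.pi, N_PHI)                          # colatitude
THETA, PHI = np.meshgrid(theta, phi, indexing="ij")
# Quadrature weights for integration over the sphere
dA = np.sin(PHI) * (2 * np.pi / N_THETA) * (np.pi / N_PHI)

def sh_power_spectrum(f):
    """Per-degree power S_l of a function f on the sphere grid.
    S_l is invariant under 3D rotations of the sphere."""
    spectrum = np.empty(L_MAX + 1)
    for l in range(L_MAX + 1):
        power = 0.0
        for m in range(-l, l + 1):
            Ylm = sph_harm(m, l, THETA, PHI)
            a_lm = np.sum(f * np.conj(Ylm) * dA)  # projection onto Y_l^m
            power += np.abs(a_lm) ** 2
        spectrum[l] = power
    return spectrum

def descriptor_distance(f_query, f_ref):
    """Rotation-invariant dissimilarity between two binary sky masks."""
    return np.linalg.norm(sh_power_spectrum(f_query) - sh_power_spectrum(f_ref))

# Toy example: a sky mask and a copy rotated about the polar axis match closely.
sky = (PHI < np.pi / 3 + 0.2 * np.sin(3 * THETA)).astype(float)
sky_rot = np.roll(sky, N_THETA // 4, axis=0)  # 90-degree yaw rotation
print(descriptor_distance(sky, sky_rot))      # ~0: same place, rotated
```

Note that the power spectrum discards the relative phase between degrees, so two genuinely different skylines can in principle collide; comparing descriptors along image sequences, as the paper's sequence-based matching does, suppresses such aliasing.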
Evidence from behavioral experiments suggests that insects use the skyline as a cue for visual navigation. However, changes in lighting conditions over hours, days, or possibly seasons significantly affect the appearance of the sky and of ground objects. One possible solution to this problem is to extract the "skyline" by an illumination-invariant classification of the environment into two classes: ground objects and sky. In a previous study (Insect models of illumination-invariant skyline extraction from UV (ultraviolet) and green channels), we examined the idea of using two color channels available to many insects (UV and green) to perform this segmentation. We found that for suburban scenes in temperate zones, where the skyline is dominated by trees and artificial objects such as houses, a "local" UV segmentation with adaptive thresholds applied to individual images leads to the most reliable classification. Furthermore, a "global" segmentation with fixed thresholds (trained on an image dataset recorded over several days) using UV-only information is only slightly worse than one using both the UV and green channels. In this study, we address three issues. First, to extend the limited range of environments covered by the dataset from the previous study, we gathered additional samples of skylines whose ground objects consist of minerals (stones, sand, earth). We show that, also in mineral-rich environments, UV-only segmentation achieves a quality comparable to multi-spectral (UV and green) segmentation. Second, we collected a wide variety of ground objects to examine their spectral characteristics under different lighting conditions. On the one hand, we found that the special case of diffusely illuminated minerals makes it more difficult to reliably separate ground objects from the sky. On the other hand, the spectral characteristics of this collection agree well with the data in the skyline databases; the increased variety of ground objects thus strengthens the validity of our findings for novel environments. Third, we collected omnidirectional skyline images, as often used for visual navigation tasks, with a UV-reflective hyperbolic mirror. We show that "local" segmentation techniques can be adapted to panoramic images by splitting the image into segments and finding an individual threshold for each segment; this is not possible for "global" techniques.
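As a concrete illustration of the "local" scheme, the sketch below (our own construction, not the study's code) derives one adaptive threshold per UV image with Otsu's method and, for panoramic images, one threshold per azimuthal segment; sky is taken as the UV-brighter class, and the 8-bit input and helper names are assumptions.

```python
# "Local" UV-only sky/ground segmentation with per-image and per-segment
# adaptive thresholds (illustrative sketch only).
import numpy as np

def otsu_threshold(values, n_bins=256):
    """Adaptive threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=n_bins)
    p = hist / hist.sum()
    w0 = np.cumsum(p)                      # probability of the "ground" class
    mu = np.cumsum(p * np.arange(n_bins))  # cumulative class means
    denom = w0 * (1.0 - w0)
    between = np.zeros(n_bins)
    valid = denom > 0
    between[valid] = (mu[-1] * w0[valid] - mu[valid]) ** 2 / denom[valid]
    return edges[np.argmax(between) + 1]

def segment_local(uv_image):
    """'Local' segmentation: one threshold per image; sky = UV-brighter pixels."""
    return uv_image > otsu_threshold(uv_image.ravel())

def segment_panoramic(uv_pano, n_segments=8):
    """Panoramic variant: one threshold per azimuthal segment (columns = azimuth)."""
    sky = np.empty(uv_pano.shape, dtype=bool)
    for cols in np.array_split(np.arange(uv_pano.shape[1]), n_segments):
        sky[:, cols] = uv_pano[:, cols] > otsu_threshold(uv_pano[:, cols].ravel())
    return sky

# Example: segment a synthetic 8-bit UV panorama (bright sky, dark ground).
rng = np.random.default_rng(0)
pano = np.vstack([rng.normal(200, 15, (30, 128)),   # sky rows
                  rng.normal(60, 25, (34, 128))]).clip(0, 255)
print(segment_panoramic(pano).mean())  # fraction classified as sky (~0.47)
```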
Navigation in cluttered environments is an important challenge for animals and robots alike and has been the subject of many studies trying to explain and mimic animal navigational abilities. However, the question of selecting an appropriate home location has so far received little attention. This is surprising, since the choice of a home location might greatly influence an animal's navigation performance. To address the question of home choice in cluttered environments, we performed a systematic analysis of homing trajectories in computer simulations using a skyline-based local homing method. Our analysis reveals that homing performance strongly depends on the location of the home in the environment. Furthermore, it appears that by assessing homing success in the immediate vicinity of the home, an animal might be able to predict its overall success in returning to it from within a much larger area.
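For readers unfamiliar with skyline-based local homing, the following sketch (a simplified stand-in for the simulation, with a hypothetical toy world of three point landmarks) illustrates the underlying mechanism: descend in the difference between the current skyline elevation profile and the snapshot stored at home. Whether a run succeeds depends on where the home lies relative to the surrounding clutter, which is the effect analysed in the study.

```python
# Skyline-based local homing by descent in image differences
# (illustrative sketch; the world model is a toy assumption).
import numpy as np

# Toy 2D world: three point landmarks ("trees") produce a skyline whose
# apparent elevation per azimuth bin grows as the viewer approaches them.
trees = np.array([[1.0, 0.0], [0.0, 1.5], [-1.2, -0.8]])
azimuths = np.linspace(0, 2 * np.pi, 64, endpoint=False)

def world_skyline(pos):
    """Skyline elevation profile (one value per azimuth bin) at position pos."""
    pos, profile = np.asarray(pos, float), np.zeros_like(azimuths)
    for t in trees:
        d = t - pos
        bearing = np.arctan2(d[1], d[0])
        elev = np.arctan(0.5 / np.linalg.norm(d))          # apparent height
        ang = (azimuths - bearing + np.pi) % (2 * np.pi) - np.pi
        profile = np.maximum(profile, elev * np.exp(-(ang / 0.3) ** 2))
    return profile

def homing_step(pos, home_profile, step=0.1, n_dirs=8):
    """Test n_dirs candidate moves; take the one whose skyline best matches home."""
    angles = np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)
    cands = [pos + step * np.array([np.cos(a), np.sin(a)]) for a in angles]
    diffs = [np.linalg.norm(world_skyline(c) - home_profile) for c in cands]
    return cands[int(np.argmin(diffs))]

def home(start, home_pos, max_steps=200, tol=0.15):
    """Iterate homing steps; report whether the agent ends up near home."""
    pos, target = np.asarray(start, float), world_skyline(home_pos)
    for _ in range(max_steps):
        pos = homing_step(pos, target)
        if np.linalg.norm(pos - np.asarray(home_pos)) < tol:
            return True, pos
    return False, pos

print(home(start=[0.6, -0.4], home_pos=[0.0, 0.0]))
```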
Inspired by the learning walks of the ant Ocymyrmex robustior, the original multi-snapshot model was introduced, which, in contrast to the classical "single snapshot at the goal" model, collects multiple snapshots in the vicinity of the goal location that can subsequently be used for homing, that is, for guiding the return to the goal. In this study, we show that the multi-snapshot model can be generalized to homing in three dimensions. In addition to capturing snapshots at positions shifted in all three dimensions, we suggest decoupling the home direction from the orientation of snapshots and associating a home vector with each snapshot. We then propose a modification of the multi-snapshot model for three-dimensional route following and evaluate its performance in an accurate reconstruction of a real environment. As an illumination-invariant alternative to grayscale images, we also examine sky-segmented images. We use spherical harmonics as an efficient representation of panoramic images, enabling low memory usage and fast similarity estimation between rotated images. The results show that our approach can steer an agent reliably along a route, making it suitable also for robotic applications using on-board computers with limited resources.
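The sketch below (our own simplification, not the authors' model) captures the core data structure: each snapshot stores a rotation-invariant descriptor, e.g. derived from spherical-harmonic coefficients of a sky-segmented panorama, together with its own 3D home vector, decoupled from the snapshot's orientation; the agent steers along the vector of the best-matching snapshot.

```python
# Route following with per-snapshot home vectors (illustrative sketch;
# descriptors here are dummy vectors standing in for precomputed
# spherical-harmonic representations).
import numpy as np
from dataclasses import dataclass

@dataclass
class Snapshot:
    descriptor: np.ndarray   # rotation-invariant image descriptor
    home_vector: np.ndarray  # 3D unit vector toward the goal/route

def steering_direction(current_descriptor, snapshots):
    """Steer along the home vector of the best-matching stored snapshot."""
    dists = [np.linalg.norm(current_descriptor - s.descriptor) for s in snapshots]
    return snapshots[int(np.argmin(dists))].home_vector

# Toy usage: five snapshots with dummy descriptors; a noisy re-observation
# of snapshot 2 retrieves its associated home vector.
rng = np.random.default_rng(1)
snapshots = []
for _ in range(5):
    v = rng.normal(size=3)
    snapshots.append(Snapshot(rng.normal(size=16), v / np.linalg.norm(v)))
query = snapshots[2].descriptor + 0.01 * rng.normal(size=16)
print(steering_direction(query, snapshots))  # == snapshots[2].home_vector
```

Because the match is purely descriptor-based, the retrieved vector does not depend on the agent's current orientation, which is precisely what decoupling the home direction from snapshot orientation buys.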