This paper presents a novel self-localization technique for mobile robots based on image feature matching from omnidirectional vision. The proposed method first constructs a virtual space with synthetic omnidirectional imaging to simulate a mobile robot equipped with an omnidirectional vision system in the real world. In the virtual space, a number of vertical and horizontal lines are generated according to the structure of the environment. They are imaged by the virtual omnidirectional camera using the catadioptric projection model. The omnidirectional images derived from the virtual and real environments are then used to match the synthetic lines to real scene edges. Finally, the pose and trajectory of the mobile robot in the real world are estimated by the efficient perspective-n-point (EPnP) algorithm based on the matched line features. In our experiments, the effectiveness of the proposed self-localization technique was validated by the navigation of a mobile robot in a real-world environment.
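The pose-estimation step can be illustrated with OpenCV's EPnP solver. The sketch below assumes that 3D points sampled on the synthetic lines have already been matched to edge points in the real omnidirectional image and re-projected onto a perspective image plane; the point coordinates, intrinsics, and frame conventions are placeholders, not values from the paper.

```python
# Hedged sketch: recovering the robot pose from matched 3D/2D line points with EPnP.
import numpy as np
import cv2

# 3D points sampled on the synthetic vertical/horizontal lines (world frame, metres).
# Values are illustrative only.
object_points = np.array([
    [0.0, 0.0, 2.0],
    [0.0, 1.5, 2.0],
    [1.2, 0.0, 3.0],
    [0.5, 1.0, 2.2],
], dtype=np.float64)

# Matched 2D edge points after unwarping the omnidirectional image to a
# perspective view (pixel coordinates, illustrative).
image_points = np.array([
    [320.4, 241.7],
    [318.9, 180.2],
    [402.1, 243.5],
    [360.8, 205.0],
], dtype=np.float64)

# Intrinsics of the virtual perspective camera (placeholder values);
# distortion is assumed to be handled by the unwarping step.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_EPNP)
if ok:
    R, _ = cv2.Rodrigues(rvec)               # world-to-camera rotation
    robot_position = (-R.T @ tvec).ravel()   # camera (robot) position in the world frame
    print("Estimated robot position:", robot_position)
```

Repeating this estimation frame by frame and concatenating the recovered poses yields the navigation trajectory described in the abstract.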
In this paper, a system is designed for improving the quality of novel view synthesis. The basic idea is to refine the 3D model using the camera parameters of the novel view. In this system, we first reconstruct a visual hull via shape-from-silhouette and then refine this 3D model in a view-dependent manner. The 3D points of the model are classified into outline points and non-outline points according to the virtual viewpoint. To refine the model, both the outline and non-outline points are moved iteratively by minimizing an energy function until convergence. The key term is the photo-consistency energy, supplemented by smoothness and contour/visual hull energy terms. The latter two terms help avoid local minima when minimizing the photo-consistency energy. Finally, we render the novel view image using view-dependent image synthesis by blending the pixel values from reference cameras near the virtual camera.
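A minimal sketch of such an iterative refinement loop is shown below. It assumes a point cloud from the visual hull with per-point normals and neighbour indices, colour reference images, and 3x4 projection matrices; the energy definitions are simplified stand-ins (colour variance for photo-consistency, distance to the neighbour centroid for smoothness), the contour/visual hull term is omitted for brevity, and all weights and step sizes are illustrative rather than taken from the paper.

```python
# Hedged sketch: view-dependent point refinement by energy minimization.
import numpy as np

def project(points, P):
    """Project N x 3 points with a 3 x 4 camera matrix; returns N x 2 pixel coords."""
    h = np.hstack([points, np.ones((len(points), 1))]) @ P.T
    return h[:, :2] / h[:, 2:3]

def sample_colors(image, uv):
    """Nearest-neighbour colour lookup, clamped to the image border (colour images)."""
    h, w = image.shape[:2]
    u = np.clip(uv[:, 0].round().astype(int), 0, w - 1)
    v = np.clip(uv[:, 1].round().astype(int), 0, h - 1)
    return image[v, u].astype(np.float64)

def photo_consistency(points, images, cams):
    """Per-point colour variance across the reference views (lower is better)."""
    colors = np.stack([sample_colors(img, project(points, P))
                       for img, P in zip(images, cams)])
    return colors.var(axis=0).mean(axis=-1)

def smoothness(points, neighbors):
    """Per-point squared distance from the centroid of its neighbours."""
    centroids = points[neighbors].mean(axis=1)
    return ((points - centroids) ** 2).sum(axis=1)

def refine(points, normals, images, cams, neighbors,
           w_photo=1.0, w_smooth=0.3, step=0.002, iters=50, tol=1e-6):
    """Move each point along its normal whenever that lowers its total energy."""
    def energy(p):
        return (w_photo * photo_consistency(p, images, cams)
                + w_smooth * smoothness(p, neighbors))
    prev = np.inf
    for _ in range(iters):
        e = energy(points)
        if prev - e.sum() < tol:          # stop once the total energy has converged
            break
        prev = e.sum()
        for delta in (step, -step):       # try a small move along +/- normal
            cand = points + delta * normals
            e_cand = energy(cand)
            better = e_cand < e
            points[better] = cand[better]
            e = np.minimum(e, e_cand)
    return points
```

In practice the outline points would additionally be constrained by the contour/visual hull energy so that they keep projecting onto the silhouette boundary seen from the virtual viewpoint, which is what prevents the photo-consistency term from drifting into a local minimum.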
This paper presents geometric techniques for self-localization improvement, especially for robots equipped with a single catadioptric camera. We take vertical line and intersection point matching into account and propose a novel descriptor named the "Double-Gaussian vector". The descriptor uses two Gaussian matrices to blur the processed image region and build the corresponding feature vectors, which solve the vertical line matching between two consecutive video frames. For ground plane estimation, the lines perpendicular to the optical axis are extracted by two approximate curve equations, which then crop the ground plane area of the omnidirectional image. Sparse bundle adjustment (SBA) is adopted to iteratively refine the 3D matched points between two robot locations and optimize the robot pose estimation. The convergent 3D points are used to compute the robot poses and record the navigation trajectory. The experimental results show that the proposed methods significantly improve robot localization and navigation compared with previous work.
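The sketch below illustrates one plausible reading of a "Double-Gaussian" style line descriptor: the patch around a detected vertical line is blurred with two Gaussian kernels of different widths and the results are reduced to a normalized feature vector used for frame-to-frame matching. The patch size, sigmas, and matching threshold are placeholder choices, and the actual construction in the paper may differ.

```python
# Hedged sketch: a two-scale Gaussian line descriptor and greedy matching.
import numpy as np
import cv2

def double_gaussian_descriptor(gray, line_x, y0, y1, half_width=8,
                               sigma1=1.0, sigma2=3.0):
    """Build a descriptor for the vertical line segment at column line_x
    (assumes the line is far enough from the image border)."""
    x0 = max(line_x - half_width, 0)
    x1 = min(line_x + half_width + 1, gray.shape[1])
    patch = gray[y0:y1, x0:x1].astype(np.float32)
    blur1 = cv2.GaussianBlur(patch, (0, 0), sigma1)   # fine-scale blur
    blur2 = cv2.GaussianBlur(patch, (0, 0), sigma2)   # coarse-scale blur
    # Collapse each blurred patch along the line direction into a column profile,
    # concatenate the two scales, and normalise for robustness to brightness changes.
    vec = np.concatenate([blur1.mean(axis=0), blur2.mean(axis=0)])
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def match_lines(desc_prev, desc_curr, max_dist=0.25):
    """Greedy nearest-neighbour matching between two frames' line descriptors."""
    matches = []
    for i, d in enumerate(desc_prev):
        dists = [np.linalg.norm(d - c) for c in desc_curr]
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j))
    return matches
```

The matched vertical lines would then feed the triangulation and SBA refinement steps that recover the 3D points and robot poses between consecutive locations.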