Indoor pedestrian navigation systems are increasingly needed in many types of applications. However, such systems still face many challenges. In addition to being accurate, a pedestrian positioning system must be mobile, cheap, and lightweight. Many approaches have been explored. In this paper, we take advantage of the sensors integrated in a smartphone and their capabilities to develop and compare two low-cost, hands-free, handheld indoor navigation systems. The first relies on embedded vision (the smartphone camera), while the second is based on low-cost smartphone inertial sensors (magnetometer, accelerometer, and gyroscope) to provide a relative position of the pedestrian. The two associated algorithms are computationally lightweight, since their implementations take into account the restricted resources of the smartphone. In the experiments conducted, we evaluate and compare the accuracy and repeatability of the two positioning methods on different indoor paths. The results obtained demonstrate that the vision-based localization system outperforms the inertial sensor-based positioning system.
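To illustrate the inertial option described above, a minimal pedestrian dead-reckoning update might look like the following sketch. The step-detection threshold and fixed step length are illustrative assumptions, not values from the paper; a real system would fuse gyroscope and magnetometer data for the heading and filter the accelerometer signal.

```python
import math

def detect_steps(accel_norms, threshold=11.0):
    """Count steps as upward crossings of an acceleration-magnitude
    threshold in m/s^2 (illustrative value; real systems add
    low-pass filtering and debouncing)."""
    steps = 0
    above = False
    for a in accel_norms:
        if a > threshold and not above:
            steps += 1
            above = True
        elif a <= threshold:
            above = False
    return steps

def pdr_update(position, heading_rad, step_length=0.7):
    """Advance the 2-D position by one detected step along the
    current heading (dead reckoning with an assumed step length)."""
    x, y = position
    return (x + step_length * math.cos(heading_rad),
            y + step_length * math.sin(heading_rad))
```

Because each step only adds a small displacement along the current heading, heading errors from the low-cost sensors accumulate over the path, which is consistent with the paper's finding that the vision-based system is more accurate.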
This paper proposes an algorithm pipeline for estimating camera orientation from vanishing-point computation, targeting pedestrian navigation assistance in a Manhattan world. Inspired by previously published methods, the proposed pipeline introduces a novel sampling strategy over finite and infinite vanishing points, together with tracking along a video sequence, to enforce robustness by extracting the three most pertinent orthogonal directions while preserving a short processing time for real-time applications. Experiments on real images and video sequences show that the proposed heuristic strategy for selecting orthogonal vanishing points is effective: our algorithm gives better results than the recently published RNS optimal method [16], in particular for the yaw angle, which is essential for the navigation task.
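As background for the vanishing-point computation mentioned above, a standard way to intersect image line segments is to work in homogeneous coordinates, where finite and infinite vanishing points are handled uniformly (a near-zero last coordinate means the point lies at infinity, i.e. the lines are parallel in the image). This is a generic sketch of that textbook construction, not the paper's sampling or tracking strategy:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (cross product of
    the points in homogeneous coordinates)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(lines):
    """Least-squares intersection of a set of homogeneous lines:
    the right singular vector of the stacked line matrix with the
    smallest singular value. Returns [x, y, w]; w close to zero
    indicates an infinite vanishing point."""
    A = np.vstack(lines)
    _, _, vt = np.linalg.svd(A)
    return vt[-1]
```

Selecting three mutually orthogonal vanishing directions from such candidates is what constrains the camera orientation in a Manhattan world.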
In this paper, we propose to use visual saliency to improve an indoor localization system based on image matching. A learning step determines the reference trajectory by selecting key frames along the path. During the localization step, the current image is compared to these key frames in order to estimate the user's position. This comparison is performed by extracting primitive information through a saliency method, which improves our localization system by focusing attention on the most singular regions to match. Another advantage of saliency-guided detection is reduced computation time. The proposed framework has been developed and tested on a smartphone. The results obtained show the benefit of saliency models by comparing the number of features and good matches in video sequences.
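The computation-time saving described above comes from matching fewer descriptors: keypoints outside salient regions are discarded before matching. A minimal sketch of that filtering step, assuming a precomputed per-pixel saliency map normalized to [0, 1] (the threshold is an illustrative assumption):

```python
def filter_by_saliency(keypoints, saliency_map, threshold=0.5):
    """Keep only keypoints that fall in salient regions, so fewer
    descriptors need to be extracted and matched against the key
    frames. `keypoints` are (x, y) pixel coordinates; `saliency_map`
    is indexed as saliency_map[y][x] with values in [0, 1]."""
    return [(x, y) for (x, y) in keypoints
            if saliency_map[y][x] >= threshold]
```

For example, with a 2x2 saliency map `[[0.1, 0.9], [0.8, 0.2]]`, only the keypoints landing on the two salient pixels survive the filter.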