Visual inertial odometry (VIO) is a technique for estimating the change in position and orientation of a mobile platform over time using measurements from on-board cameras and an IMU. Recently, VIO has attracted significant attention from a large number of researchers and is gaining popularity in various potential applications, owing to the miniaturisation and low cost of the two sensing modalities. However, it remains very challenging in both technical development and engineering implementation when accuracy, real-time performance, robustness and operating scale are taken into consideration. This survey reports state-of-the-art VIO techniques from the perspectives of filtering-based and optimisation-based approaches, the two dominant approaches in the research area. To this end, various representations of 3D rigid-body motion are first illustrated. Filtering-based approaches are then reviewed, followed by optimisation-based approaches. The links between the two approaches are clarified via the framework of Bayesian Maximum A Posteriori (MAP) estimation. Other aspects, such as observability and self-calibration, are also discussed.
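The link between filtering and optimisation via MAP estimation can be illustrated with a toy scalar example (values and variable names here are illustrative, not from the survey): a single Kalman update produces the same estimate as minimising the corresponding negative log-posterior.

```python
import numpy as np

# Toy 1D example: one Kalman (filtering) update coincides with the MAP
# estimate of the equivalent weighted least-squares (optimisation) problem.
x_prior, P = 0.0, 2.0   # prior mean and variance (illustrative values)
z, R = 1.0, 0.5         # measurement and its noise variance

# Filtering view: Kalman gain and state update
K = P / (P + R)
x_filter = x_prior + K * (z - x_prior)

# Optimisation view: minimise (x - x_prior)^2 / P + (x - z)^2 / R,
# i.e. the negative log of the Gaussian posterior; closed-form minimiser:
x_map = (x_prior / P + z / R) / (1.0 / P + 1.0 / R)

assert np.isclose(x_filter, x_map)
print(x_filter)  # 0.8
```

The same equivalence underlies the full VIO case, where the state is a pose on SE(3) and the optimisation is solved iteratively rather than in closed form.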
This paper presents an indoor relocalization system using a dual-stream Convolutional Neural Network (CNN) that takes both color images and depth images as network inputs. Addressing the pose regression problem, a deep neural network architecture for RGB-D images is introduced, a stage-wise training method for the dual-stream CNN is presented, different depth image encoding methods are discussed, and a novel encoding method is proposed. By introducing range information into the network through a dual-stream architecture, we not only improve the relocalization accuracy by about 20% compared with the state-of-the-art deep learning method for pose regression, but also greatly enhance the system's robustness in challenging scenes such as large-scale, dynamic, fast-movement and nighttime environments. To the best of our knowledge, this is the first work to solve the indoor relocalization problem using deep CNNs with an RGB-D camera. The method is first evaluated on the Microsoft 7-Scenes dataset to show its advantage in accuracy over other CNNs. Large-scale indoor relocalization with our method is further presented. The experimental results show that an accuracy of 0.3 m in position and 4° in orientation can be obtained. Finally, the method is evaluated on challenging indoor datasets collected with a motion capture system. The results show that the relocalization performance is hardly affected by dynamic objects, motion blur or nighttime environments. Note to Practitioners: This work was motivated by the limitations of existing indoor relocalization technology, which is significant for mobile robot navigation. Using this technology, robots can infer where they are in a previously visited place. Previous visual localization methods can hardly be put into wide application because they have strict requirements on the environment.
When faced with challenging scenes such as large-scale environments, dynamic objects, motion blur caused by fast movement, nighttime environments or other scenes with appearance changes, most existing methods tend to fail. This paper introduces deep learning into the indoor relocalization problem, using a dual-stream CNN (a depth stream and a color stream) to perform 6-DOF pose regression in an end-to-end manner. The localization error is about 0.3 m in position and 4° in orientation in a large-scale indoor environment. More importantly, the proposed system does not lose effectiveness in challenging scenes. The proposed depth image encoding method can also be adopted in other deep neural networks that use RGB-D cameras as the sensor.
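To make the depth-stream input concrete, here is a minimal sketch of one common encoding strategy (this is not the paper's novel encoding, and the clipping range and image shape are assumptions): a raw 16-bit depth map is clipped, normalised, and replicated to three channels so that a colour-pretrained CNN stream can consume it.

```python
import numpy as np

def encode_depth(depth_mm, d_min=500.0, d_max=5000.0):
    """Clip raw depth (mm) to [d_min, d_max], normalise to [0, 255],
    and replicate to three channels for a colour-pretrained CNN.
    The range [500, 5000] mm is an illustrative assumption."""
    d = np.clip(depth_mm.astype(np.float32), d_min, d_max)
    d = (d - d_min) / (d_max - d_min) * 255.0
    return np.repeat(d[..., None], 3, axis=-1).astype(np.uint8)

# Synthetic flat depth map at 2750 mm (mid-range), VGA resolution
depth = np.full((480, 640), 2750, dtype=np.uint16)
img = encode_depth(depth)
print(img.shape)     # (480, 640, 3)
print(img[0, 0, 0])  # 127
```

More elaborate encodings (e.g. colormap-based or surface-normal encodings) follow the same pattern of mapping one depth channel into the three-channel input a pretrained colour network expects.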