In mobile visual sensor networks, relative pose (location and orientation) estimation is a prerequisite for a wide range of collaborative tasks. In this paper, we present a distributed, peer-to-peer algorithm for relative pose estimation in a network of mobile robots equipped with RGB-D cameras acting as a visual sensor network. Our algorithm uses depth information to estimate the relative pose of a robot when camera sensors mounted on different robots observe a common scene from different viewing angles. To build the algorithm, we first developed a framework based on the beam-based sensor model that eliminates the adverse effects of situations in which the two views of a scene are each only partially seen by the sensors. Then, to cancel the bias introduced by the beam-based sensor model, we developed a scheme that symmetrizes the estimation across the two views. We conducted simulations and also implemented the algorithm on our mobile visual sensor network testbed. Both the simulation and experimental results indicate that the proposed algorithm is fast enough for real-time operation and maintains high estimation accuracy. To our knowledge, it is the first distributed relative pose estimation algorithm that uses depth information captured by multiple RGB-D cameras.
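Although the paper's beam-based model and symmetrization scheme are detailed later, the core operation the abstract describes, registering two depth views to recover a relative pose, can be illustrated with the standard closed-form SVD-based (Kabsch/Umeyama) rigid alignment. The sketch below is not the paper's algorithm; it assumes corresponding 3D points have already been extracted from the two sensors' depth images, and the helper names and intrinsics parameters are illustrative.

```python
# Minimal sketch (not the paper's algorithm): SVD-based rigid alignment
# between two depth views, assuming point correspondences are available.
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an (N, 3) point cloud
    using pinhole intrinsics. Illustrative helper, not the paper's pipeline."""
    v, u = np.nonzero(depth > 0)          # pixels with valid depth
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def relative_pose(p_src, p_dst):
    """Estimate (R, t) such that R @ p_src[i] + t ~= p_dst[i].

    p_src, p_dst: (N, 3) arrays of corresponding 3D points observed
    by the two sensors (e.g., back-projected from their depth images).
    """
    mu_s, mu_d = p_src.mean(axis=0), p_dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (p_src - mu_s).T @ (p_dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In practice the correspondences between the two views would come from RGB feature matching or iterative closest-point association; the closed-form step above is what such registration pipelines solve at each iteration.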
I. INTRODUCTION

The latest advances in video technology, inexpensive camera sensors, and distributed processing have enabled the wide deployment of image sensors, giving rise to a new paradigm: the visual sensor network [1]. Visual sensor networks observe and process image/video data to provide rich situational awareness. By replacing conventional RGB cameras with RGB-D camera sensors (e.g., the Microsoft Kinect [2]), which capture a color image along with per-pixel depth information, visual sensor networks promise an even wider range of innovative applications, such as 3D reconstruction and object localization.

In this paper, we consider mobile visual sensor networks of robots equipped with RGB-D camera sensors that observe the environment. We treat each robot as a mobile RGB-D sensor. The goal is to enable each mobile RGB-D sensor to obtain precise location and orientation information about the other sensors. To achieve this goal, we present a peer-to-peer, distributed depth image registration algorithm that estimates the relative pose between sensors when two or more of them observe a common scene from different angles.

Much research in the last few years has addressed determining the pose of a single RGB-D sensor. The most