Visual coverage of large-scale environments is a key issue for several important application scenarios, such as large-scale three-dimensional (3D) reconstruction, search and rescue, and active video surveillance. In this paper, we consider a setting where a group of robots, equipped with standard cameras, must fully cover a surrounding area while minimizing the associated movement costs. To address this problem, we use a descriptor for visual coverage that encodes a measure of the visual information contained in an area, based on a regular discretization of the environment into voxels. In particular, our measure of coverage is related to how much of each voxel's volume is "penetrated" by the bundle of rays projected from the camera center. Crucially, our descriptor explicitly encodes the viewing direction from which each voxel is observed, a key element in measuring the coverage of visual sensors. Next, we propose an autonomous cooperative exploration approach to control robot movements. Specifically, based on our visual coverage descriptor, we define a utility function that encodes the value of future moves for the robots, and we select the next move for a robot by greedily maximizing this utility function. Finally, we empirically evaluate our approach in a simulation scenario based on real visual data and on widely used robotic tools (such as ROS and Stage). Experimental results show that the proposed method outperforms the baseline approaches (i.e., random, semirandom, and uncoordinated) and is thus a valid solution for visual coverage in large-scale outdoor scenarios.

Our approach provides a novel framework for the visual coverage problem. First, the proposed method takes into account several aspects related to the use of standard cameras instead of classical range sensors by associating a directional descriptor with each voxel of the grid. In practice, the descriptor contains three types of statistics: