This article analyzes the evolution and current trends in aerial robotic manipulation, comprising helicopters, conventional underactuated multirotors, and multidirectional thrust platforms equipped with a wide variety of robotic manipulators capable of physically interacting with the environment. It also covers cooperative aerial manipulation and interconnected actuated multibody designs. The review is completed with developments in teleoperation, perception, and planning. Finally, a new generation of aerial robotic manipulators is presented with our vision of the future.

Index Terms: Aerial manipulation, aerial robots physically interacting with the environment, unmanned aerial vehicles.
I. INTRODUCTION

The field of aerial robots physically interacting with the environment, and particularly aerial robotic manipulators (AEROMs), has experienced ten years of sustained growth. Diverse prototypes, functionalities, and capabilities have been developed and evaluated in representative indoor and outdoor scenarios, demonstrating the possibility of successfully performing manipulation tasks while flying. The ability of aerial manipulators to quickly reach and operate in high-altitude workspaces, along with the level of maturity reached in recent years, has led to the application of this technology in areas such as inspection and maintenance, reducing time, cost, and risk for human workers. In this sense, this article aims to provide a broad perspective and analysis of the work done in aerial manipulation.