Autonomous flight with unmanned aerial vehicles (UAVs) currently depends on the availability and reliability of Global Navigation Satellite Systems (GNSS). In cluttered outdoor scenarios, such as narrow gorges, or near tall artificial structures, such as bridges or dams, reduced sky visibility and multipath effects compromise the quality and trustworthiness of GNSS position fixes, making autonomous, or even manual, flight difficult and dangerous. To overcome this problem, cooperative navigation has been proposed: a second UAV flies away from any occluding objects, in line of sight of the first, and provides the latter with positioning information, removing the need for full and reliable GNSS coverage in the area of interest. In this work we use high-power light-emitting diodes (LEDs) to mark the second drone, and we present a computer vision pipeline that tracks it in real time from distances of up to 100 m and computes its relative position with decimeter accuracy. The pipeline is based on an extension of the classical iterative algorithm for the Perspective-n-Point problem, in which the photometric error is minimized according to an image formation model. This extension substantially increases the accuracy of point-feature measurements in image space (up to 0.05 pixels), which directly translates into higher positioning accuracy compared with conventional methods.
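The subpixel accuracy claimed above comes from fitting an image formation model to the observed LED spot rather than taking an integer-pixel detection. The following is a minimal sketch of that idea, not the authors' actual pipeline: it assumes the LED projects as an isotropic 2D Gaussian over a constant background (the model, the `render_blob`/`refine_center` names, and all parameter values are illustrative) and refines the spot center by least-squares minimization of the photometric residual.

```python
import numpy as np
from scipy.optimize import least_squares

def render_blob(params, xs, ys):
    # Assumed image-formation model: isotropic 2D Gaussian spot + background.
    cx, cy, amp, sigma, bg = params
    return bg + amp * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def refine_center(patch, init):
    # Minimize the photometric error between the model and the pixel patch;
    # the optimum gives the spot center with subpixel resolution.
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    residual = lambda p: (render_blob(p, xs, ys) - patch).ravel()
    return least_squares(residual, init).x[:2]

# Synthetic check: render a spot at a known subpixel location, then recover it
# starting from a coarse (integer-pixel) initial guess.
ys, xs = np.mgrid[0:15, 0:15]
patch = render_blob((7.3, 6.8, 1.0, 1.5, 0.1), xs, ys)
cx, cy = refine_center(patch, (7.0, 7.0, 0.8, 2.0, 0.0))
```

In a real system the recovered centers of several LEDs would then feed the Perspective-n-Point solver to obtain the relative pose.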
For connected vehicles to have a substantial effect on road safety, accurate positions and trajectories must be shared. To this end, all vehicles must be accurately geo-localized in a common frame. This can be achieved by merging GNSS (Global Navigation Satellite System) information with visual observations matched against a map of geo-positioned landmarks. Building such a map remains a challenge, and current solutions face strong cost-related limitations. We present a collaborative framework for high-definition mapping, in which vehicles equipped with standard sensors, such as a GNSS receiver and a monocular camera, update a map of geo-localized landmarks. Our system is composed of two processing blocks: the first is embedded in each vehicle and geo-localizes the vehicle and the detected landmarks. The second runs on cloud servers and uses observations from all the vehicles to compute updates for the map of geo-positioned landmarks. As the map's landmarks are detected and positioned by more and more vehicles, the accuracy of the map increases, eventually converging in probability to zero error. The landmarks' geo-positions are estimated in a stable and scalable way, enabling dynamic map updates to be provided automatically.
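The convergence argument above relies on fusing many noisy, independent observations of the same landmark. A minimal sketch of such a server-side fusion step (not the paper's actual algorithm; the `LandmarkEstimate` class and all numbers are illustrative) is an information-filter accumulation: each vehicle's observation adds its inverse covariance, so the estimate's error shrinks as more vehicles report.

```python
import numpy as np

class LandmarkEstimate:
    """Fuse independent geo-position observations of one landmark."""
    def __init__(self, dim=2):
        self.info = np.zeros((dim, dim))  # accumulated information matrix
        self.vec = np.zeros(dim)          # accumulated information vector

    def update(self, z, cov):
        # Each observation z with covariance cov contributes inverse-covariance
        # weight; accumulation is order-independent, hence scalable.
        W = np.linalg.inv(cov)
        self.info += W
        self.vec += W @ z

    def estimate(self):
        return np.linalg.solve(self.info, self.vec)

# Simulated crowd-sourcing: 500 vehicles observe the same landmark with noise.
rng = np.random.default_rng(0)
true_pos = np.array([48.85, 2.35])
lm = LandmarkEstimate()
for _ in range(500):
    lm.update(true_pos + rng.normal(0.0, 0.5, size=2), np.eye(2) * 0.25)
est = lm.estimate()
```

With independent noise the estimation error decreases roughly as 1/sqrt(N) in the number of contributing vehicles, which is the sense in which the map error converges toward zero.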