Robotic collaboration promises increased robustness and efficiency of missions, with great potential in applications such as search‐and‐rescue and agriculture. Multiagent collaborative simultaneous localization and mapping (SLAM) is right at the core of enabling collaboration, such that each agent can colocalize in and build a map of the workspace. The key challenges at the heart of this problem, however, lie in robust communication, efficient data management, and effective sharing of information among the agents. To this end, here we present CCM‐SLAM, a centralized collaborative SLAM framework for robotic agents, each equipped with a monocular camera, a communication unit, and a small processing board. With each agent able to run visual odometry onboard, CCM‐SLAM ensures their autonomy as individuals, while a central server with potentially greater computational capacity enables their collaboration by collecting all their experiences, merging and optimizing their maps, and disseminating information back to them, where appropriate. An in‐depth analysis on benchmark datasets addresses the scalability of CCM‐SLAM and its robustness to the information loss and communication delays that commonly occur during real missions. This reveals that in the worst case of communication loss, collaboration is affected, but not the autonomy of the agents. Finally, the practicality of the proposed framework is demonstrated with real flights of three small aircraft equipped with different sensors and computational capabilities onboard and a standard laptop as the server, collaboratively estimating their poses and the scene on the fly.
With systems performing Simultaneous Localization And Mapping (SLAM) from a single robot reaching considerable maturity, the possibility of employing a team of robots to collaboratively perform a task has been attracting increasing interest. Promising great impact in a plethora of tasks ranging from industrial inspection to the digitization of archaeological structures, collaborative scene perception and mapping are key to efficient and effective estimation. In this paper, we propose a novel, centralized architecture for collaborative monocular SLAM employing multiple small Unmanned Aerial Vehicles (UAVs) as agents. Each agent is able to independently explore the environment running limited-memory SLAM onboard, while sending all collected information to a central server, a ground station with increased computational resources. The server manages the maps of all agents, triggering loop closure, map fusion, optimization, and distribution of information back to the agents. This allows an agent to incorporate observations from others in its SLAM estimates on the fly. We put the proposed framework to the test employing a nominal keyframe-based monocular SLAM algorithm, demonstrating the applicability of this system in multi-UAV scenarios.
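The centralized architecture described above, in which agents keep only a limited-memory local map and forward everything to a server that fuses maps on overlap, can be sketched schematically. The sketch below is a minimal illustration under our own assumptions, not the paper's implementation: the class names, the keyframe representation, and the overlap test (shared keyframe identifiers standing in for place recognition) are all hypothetical placeholders.

```python
from collections import deque

class Agent:
    """Hypothetical agent running limited-memory SLAM onboard:
    it keeps only a fixed-size window of keyframes locally and
    queues everything it collects for the central server."""
    def __init__(self, agent_id, window=5):
        self.agent_id = agent_id
        self.local_map = deque(maxlen=window)  # bounded onboard map
        self.outbox = []                       # data awaiting transmission

    def add_keyframe(self, kf):
        self.local_map.append(kf)              # oldest keyframe drops out
        self.outbox.append((self.agent_id, kf))

class Server:
    """Hypothetical central server: collects all agents' keyframes and
    marks two maps as fused when they share an observation (a crude
    stand-in for place recognition and loop closure)."""
    def __init__(self):
        self.global_map = {}   # agent_id -> full keyframe history
        self.merged = set()    # pairs of agents whose maps were fused

    def receive(self, agent):
        for aid, kf in agent.outbox:
            self.global_map.setdefault(aid, []).append(kf)
        agent.outbox.clear()   # transmission empties the agent's queue

    def detect_overlap_and_merge(self):
        for a, kfs_a in self.global_map.items():
            for b, kfs_b in self.global_map.items():
                if a < b and set(kfs_a) & set(kfs_b):
                    self.merged.add((a, b))
```

In a real system the overlap test would be visual place recognition and the merge a joint map optimization; the sketch only shows the direction of data flow: agents stay autonomous on their bounded local maps while the server accumulates and fuses the global picture.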
With robotic perception constituting the biggest impediment to deploying robots in real missions, the promise of more efficient and robust perception in multi-agent, collaborative missions can have a great impact on many robotic applications. Employing a ubiquitous and well-established visual-inertial setup onboard each agent, in this paper we propose CVI-SLAM, a novel visual-inertial framework for centralized collaborative SLAM. Sharing all information with a central server, each agent outsources computationally expensive tasks, such as global map optimization, to relieve onboard resources and passes on measurements to other participating agents, while running visual-inertial odometry onboard to ensure autonomy throughout the mission. Thoroughly analyzing CVI-SLAM, we attest to its accuracy and the improvements arising from collaboration, and evaluate its scalability in the number of participating agents and its applicability in terms of network requirements.
Figure 1: Two-dimensional descriptors obtained with our mixed-context loss approach (center) in comparison with a Siamese loss (left) and a triplet loss (right), evaluated on the MNIST test set. A normal distribution has been fitted to each cluster and its confidence ellipses plotted. The triplet network shows clear signs of the localized-context problem (Section 3.2), resulting in inconsistently scaled descriptors. While the Siamese loss does not show this effect, it does not properly take advantage of context and therefore still learns rather poor descriptors. Our mixed-context loss (Section 4.1), in contrast, shows neither problem, yielding consistently scaled descriptors, the lowest false positive rate at 95% recall (FPR95), and the best precision-recall (PR) curves.
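For reference, the two baseline losses the caption compares against are standard in metric learning. A minimal sketch of the triplet margin loss is given below; this is the generic formulation, not the paper's mixed-context loss, and the variable names and margin value are our own assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss: encourages the anchor-positive
    distance to be smaller than the anchor-negative distance by at
    least `margin`. Zero loss once the margin is satisfied."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```

Because each triplet is scored only relative to its own anchor, the gradient sees a purely local context; this is one way to understand the localized-context problem the caption attributes to the triplet network, where descriptor scales drift between clusters.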