Robotic collaboration promises increased robustness and efficiency of missions, with great potential in applications such as search‐and‐rescue and agriculture. Multiagent collaborative simultaneous localization and mapping (SLAM) is at the core of enabling collaboration, allowing each agent to colocalize in and build a map of the workspace. The key challenges at the heart of this problem, however, lie in robust communication, efficient data management, and effective sharing of information among the agents. To this end, here we present CCM‐SLAM, a centralized collaborative SLAM framework for robotic agents, each equipped with a monocular camera, a communication unit, and a small processing board. With each agent able to run visual odometry onboard, CCM‐SLAM ensures their autonomy as individuals, while a central server with potentially greater computational capacity enables their collaboration by collecting all of their experiences, merging and optimizing their maps, and disseminating information back to them where appropriate. An in‐depth analysis on benchmarking datasets addresses the scalability of CCM‐SLAM and its robustness to information loss and communication delays, both of which commonly occur during real missions. This reveals that, in the worst case of communication loss, collaboration is affected, but not the autonomy of the agents. Finally, the practicality of the proposed framework is demonstrated with real flights of three small aircraft equipped with different sensors and computational capabilities onboard, and a standard laptop as the server, collaboratively estimating their poses and the scene on the fly.