Building on the maturity of single-robot SLAM algorithms, collaborative SLAM has brought significant gains in efficiency and robustness, but it also raises new challenges, such as informational, network, and resource constraints. Several multi-robot frameworks have been proposed for visual SLAM, ranging from highly integrated, fully centralized architectures to fully distributed, decentralized methods. However, many of these architectures compromise the autonomy of the robots when fusing data processed by other agents to improve their own estimation accuracy. In this paper, we propose three methods for sharing visual-inertial information, based on rigid, condensed, and pruned visual-inertial packets. We also propose a common collaborative SLAM architecture to organize the computation, exchange, and integration of such packets. We evaluated these methods on the EuRoC dataset [1] and on our custom AirMuseum dataset [2]. Experiments show that the proposed methods allow the agents to build, exchange, and integrate consistent visual-inertial packets, improving their trajectory estimation accuracy by up to several centimeters.