This paper presents a decentralized method for aligning three robots without communication and without explicit localization. Each robot is assumed to be equipped with an omnidirectional camera and to be able to detect the two other robots in the images produced by its own camera. Each robot decides on its own movements based on continuous measurement of the angles between itself and the others. We present both the algorithm that performs this task and its theoretical analysis. We prove the correctness of the algorithm when the robots are reduced to points, collisions are not considered, and the initial formation is not an equilateral triangle. When collisions are considered, the probability that the algorithm succeeds is greater than 1 − 1/36. We then present simulations in which a security distance d_sec has been introduced. During the execution of the algorithm by each robot, if the distance between any two robots falls below this security distance, we assume that a collision is likely and consider that the algorithm has failed. Simulation results show that the larger d_sec, the higher the percentage of failures. However, runs are always successful when d_sec = 0, which suggests that the theoretical bound is not tight and could be improved. In addition, the simulations reveal that the main source of failures was not the one expected from the theoretical analysis.
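The abstract does not give the movement rule itself, but the one quantity each robot is stated to measure is the angle between the bearings to the two other robots. A minimal sketch of that measurement (names and the test geometry are illustrative, not from the paper): in an equilateral triangle every robot measures 60 degrees, which is the target configuration.

```python
import math

def interior_angle(p_self, p_a, p_b):
    """Angle at p_self between the bearings to the two other robots —
    the only quantity each robot is assumed to measure (no positions,
    no distances, no communication)."""
    ax, ay = p_a[0] - p_self[0], p_a[1] - p_self[1]
    bx, by = p_b[0] - p_self[0], p_b[1] - p_self[1]
    cos_angle = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.acos(max(-1.0, min(1.0, cos_angle)))

# Illustrative check: in an equilateral triangle each robot measures 60 degrees.
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
angles = [interior_angle(tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3])
          for i in range(3)]
```

A decentralized controller of the kind described would drive each robot so that its measured angle converges to 60 degrees.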
The main contribution of this paper is the design of a decentralized, tuning-free high-level controller able to maintain a Leader-Follower (LF) configuration without tracking errors in the case of missing or degraded communication (latency, loss, …) between the leader and follower UAVs. The high-level controller requires only minimal tuning and relies on a predictive filtering algorithm together with a first-order dynamic model to recover an estimate of the leader UAV's velocity and avoid tracking errors.
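The abstract names the two ingredients (a predictive filter and a first-order dynamic model) without giving their equations. A minimal sketch of the idea, under assumptions of my own (1-D motion, finite-difference velocity measurements, and a first-order low-pass update with time constant `tau`; the class and parameter names are hypothetical, not the paper's):

```python
class FirstOrderVelocityEstimator:
    """Illustrative sketch: recover an estimate of the leader's velocity
    from its observed position, smoothing noisy finite-difference
    measurements through a first-order model with time constant tau."""

    def __init__(self, tau=0.5, dt=0.02):
        self.tau = tau          # assumed time constant of the first-order model
        self.dt = dt            # sampling period
        self.v_hat = 0.0        # current velocity estimate
        self.prev_pos = None    # last observed leader position

    def update(self, leader_pos):
        if self.prev_pos is None:
            self.prev_pos = leader_pos
            return self.v_hat
        # raw finite-difference velocity from successive observations
        v_meas = (leader_pos - self.prev_pos) / self.dt
        self.prev_pos = leader_pos
        # discrete first-order low-pass: v_hat tracks v_meas with lag tau
        alpha = self.dt / (self.tau + self.dt)
        self.v_hat += alpha * (v_meas - self.v_hat)
        return self.v_hat

# Illustrative use: a leader moving at a constant 1.0 m/s.
est = FirstOrderVelocityEstimator(tau=0.5, dt=0.02)
for k in range(201):
    est.update(k * 0.02 * 1.0)
```

Feeding the estimate forward to the follower's controller is what would compensate for communication latency or loss: the follower predicts the leader's motion instead of reacting to stale data.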
There are numerous advantages to flying in a group over using a single robot for mission execution. However, this implies solving a crucial issue: coordination between drones. Moreover, depending on the targeted application, it may be necessary or desirable that the drones fly in a given geometric shape (line, diamond, etc.), a problem known as formation control. Building and maintaining a spatial geometric shape while moving through the environment usually requires extensive communication between the robots to coordinate their movements. In this work we focus on the use of an Image-Based Visual Servoing (IBVS) technique for building and maintaining a Leader-Follower (LF) configuration of multiple unmanned aerial vehicles (UAVs) without communication. Whereas most IBVS techniques either require rigorous camera calibration or cannot regulate the error along all three robot axes, our approach avoids the calibration phase by relying on image-moment features to provide a vision-based predictive compensation method. The follower's solution works in GNSS-denied conditions and can run using only on-board sensors. The method is validated through simulations for a group of three quadrotors.
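The abstract names IBVS with image-moment features but not the control law. The classical IBVS law computes a camera velocity from the feature error via the pseudo-inverse of an interaction matrix, v = −λ L⁺ e; a minimal sketch under that textbook formulation (the identity interaction matrix and the numeric values are illustrative assumptions, not the paper's model):

```python
import numpy as np

def ibvs_velocity(features, desired, L, gain=0.8):
    """Classical IBVS law: drive the feature error e = s - s* to zero
    with camera velocity v = -gain * pinv(L) @ e, where L is the
    interaction matrix relating feature motion to camera motion."""
    e = features - desired
    return -gain * np.linalg.pinv(L) @ e

# Toy example: three image-moment features with an assumed identity
# interaction matrix (a real L depends on the chosen moments).
L = np.eye(3)
v = ibvs_velocity(np.array([1.0, 0.5, 2.0]),   # current features s
                  np.array([1.0, 0.0, 1.5]),   # desired features s*
                  L)
```

In an LF configuration, the follower would run such a law on moments of the leader's image region, which is what makes the scheme usable with only on-board sensing and no GNSS.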