Abstract-In this paper we address the problem of controlling the motion of a group of UAVs bound to keep a formation defined in terms of only relative angles (i.e., a bearing-formation). This problem can naturally arise within the context of several multi-robot applications such as exploration, coverage, and surveillance. First, we introduce and thoroughly analyze the concept and properties of bearing-formations, and provide a class of minimally linear sets of bearings sufficient to uniquely define such formations. We then propose a bearing-only formation controller that requires only bearing measurements, converges almost globally, and maintains bounded inter-agent distances despite the lack of direct metric information. The controller still leaves the possibility of imposing group motions tangent to the current bearing-formation. These can either be autonomously chosen by the robots to fulfill any additional task (e.g., exploration), or exploited by an assisting human co-operator. For this latter 'human-in-the-loop' case, we propose a multi-master/multi-slave bilateral shared control system providing the co-operator with suitable force cues informative of the UAV performance. The proposed theoretical framework is extensively validated by means of simulations and experiments with quadrotor UAVs equipped with onboard cameras. Practical limitations, such as a limited field of view, are also considered.
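To make the bearing-only idea concrete, the following minimal Python sketch implements a standard bearing-only formation law in which each agent projects its desired bearings onto the orthogonal complement of the measured ones, so that no inter-agent distances are needed. This is a generic illustrative law under simplifying assumptions (single-integrator agents, a fixed gain, function names of our choosing), not necessarily the controller proposed in the paper.

import numpy as np

def bearing(p_i, p_j):
    """Unit bearing vector measured from agent i toward agent j."""
    d = p_j - p_i
    return d / np.linalg.norm(d)

def bearing_only_control(positions, neighbors, desired_bearings, gain=1.0):
    """
    Generic bearing-only formation law (illustrative sketch):
        u_i = -gain * sum_j (I - g_ij g_ij^T) g*_ij
    positions:        dict agent_id -> np.array of shape (3,)
    neighbors:        dict agent_id -> list of neighbor ids
    desired_bearings: dict (i, j) -> desired unit bearing g*_ij
    """
    controls = {}
    for i, p_i in positions.items():
        u = np.zeros(3)
        for j in neighbors[i]:
            g = bearing(p_i, positions[j])      # measured bearing (unit vector)
            P = np.eye(3) - np.outer(g, g)      # projector orthogonal to g
            u -= gain * P @ desired_bearings[(i, j)]
        controls[i] = u
    return controls

When a measured bearing already matches its desired value, the projector annihilates the corresponding term, so a correctly shaped formation produces zero control effort; only the directions, never the distances, enter the computation.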
Abstract-For the control of Unmanned Aerial Vehicles (UAVs) in GPS-denied environments, cameras have been widely exploited as the main sensory modality for addressing the UAV state estimation problem. However, the use of visual information for ego-motion estimation presents several theoretical and practical difficulties, such as data association, occlusions, and the lack of direct metric information when exploiting monocular cameras. In this paper, we address these issues by considering a quadrotor UAV equipped with an onboard monocular camera and an Inertial Measurement Unit (IMU). First, we propose a robust ego-motion estimation algorithm for recovering the UAV scaled linear velocity and angular velocity from optical flow by exploiting the so-called continuous homography constraint in the presence of planar scenes. Then, we address the problem of retrieving the (unknown) metric scale by fusing the visual information with measurements from the onboard IMU. To this end, two different estimation strategies are proposed and critically compared: the first exploits the classical Extended Kalman Filter (EKF) formulation, while the second is based on a novel nonlinear estimation framework. The main advantage of the latter scheme lies in the possibility of imposing a desired transient response to the estimation error when the camera moves with a constant acceleration norm w.r.t. the observed plane. We indeed show that, when compared against the EKF on the same trajectory and sensory data, the nonlinear scheme yields considerably superior performance in terms of convergence rate and predictability of the estimation. The paper concludes with an extensive experimental validation, including onboard closed-loop control of a real quadrotor UAV, meant to demonstrate the robustness of our approach in real-world conditions.
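For reference, the continuous homography constraint exploited in the first step is commonly written (sign and frame conventions vary across references) as

    \hat{x}\,(H x - u) = 0, \qquad H = \hat{\omega} + \tfrac{1}{d}\, v\, n^{\top},

where x is a normalized image point lying on the observed plane, u its optical flow, \hat{\omega} the skew-symmetric matrix of the angular velocity, v the linear velocity, n the plane normal, and d the distance to the plane. Since H only contains the ratio v/d, vision alone recovers the linear velocity up to the unknown scale d, which is precisely why the metric scale must be retrieved by fusing the IMU measurements.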
Abstract-Robot vision has become a field of increasing importance in micro aerial vehicle robotics with the availability of small and lightweight hardware. While most approaches rely on external ground stations because of their need for high computational power, we present a fully autonomous setup using only on-board hardware. Our work is based on the continuous homography constraint to recover ego-motion from optical flow. We are thus able to provide an efficient fallback routine for any kind of UAV (Unmanned Aerial Vehicle), since we rely solely on a monocular camera and on-board computation. In particular, we devise two variants of the classical continuous 4-point algorithm and provide an extensive experimental evaluation against a known ground truth. The results show that our approach is able to recover the ego-motion of a flying UAV in realistic conditions while relying only on the limited on-board computational power. Furthermore, we exploit the velocity estimate to close the loop and control the motion of the UAV online.
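To illustrate the linear core of a continuous 4-point approach, the following Python/NumPy sketch stacks the per-point constraints skew(x) (H x - u) = 0 into a least-squares problem for the continuous homography matrix H. It is only a sketch under our own naming: resolving the additive identity ambiguity in H and decomposing it into angular velocity, scaled linear velocity, and plane normal (the remaining steps of the full algorithm, including the two variants devised in the paper) are not shown.

import numpy as np

def skew(a):
    """Skew-symmetric matrix such that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0, -a[2], a[1]],
                     [a[2], 0, -a[0]],
                     [-a[1], a[0], 0]])

def estimate_continuous_homography(points, flows):
    """
    Linear least-squares estimate of the continuous homography matrix H
    from the constraints  skew(x) @ (H @ x - u) = 0  for each point.

    points: (N, 3) normalized image points  x = [x, y, 1]
    flows:  (N, 3) optical flow vectors     u = [u_x, u_y, 0]

    Note: H is only determined up to an additive multiple of the identity;
    fixing that ambiguity and decomposing H into (omega, v/d, n) belongs to
    the full continuous 4-point algorithm and is not shown here.
    """
    A, b = [], []
    for x, u in zip(points, flows):
        Sx = skew(x)
        # vec identity: skew(x) @ H @ x == (x^T kron skew(x)) @ vec(H), column-major vec
        A.append(np.kron(x, Sx))
        b.append(Sx @ u)
    A = np.vstack(A)
    b = np.concatenate(b)
    h, *_ = np.linalg.lstsq(A, b, rcond=None)   # minimum-norm solution
    return h.reshape(3, 3, order="F")           # column-major reshape matches vec(H)

With at least four points in general position on the plane, the stacked system constrains H up to the identity ambiguity noted above, which is why four point-flow correspondences give the algorithm its name.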
Abstract-The free and open-source Tele-Operation Platform of the MPI for Biological Cybernetics (TeleKyb) is an end-to-end software framework for the development of bilateral teleoperation systems between human interfaces (e.g., haptic force-feedback devices or gamepads) and groups of quadrotor Unmanned Aerial Vehicles (UAVs). In addition to drivers for devices and robots from various hardware manufacturers, TeleKyb provides a high-level closed-loop robotic controller for mobile robots that can be extended dynamically with modules for state estimation, trajectory planning, processing, and tracking. Since all internal communication is based on the Robot Operating System (ROS), TeleKyb can be easily extended to meet future needs. The capabilities of the overall framework are demonstrated both in an experimental validation of the controller for an individual quadrotor and in a complex experimental setup involving bilateral human-robot interaction and shared formation control of multiple UAVs.
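Because all internal communication goes through ROS, an external module can in principle be attached to a TeleKyb setup as an ordinary ROS node. The following Python (rospy) sketch shows this generic pattern; the topic names used here are hypothetical placeholders chosen for illustration, not TeleKyb's actual interface.

import rospy
from geometry_msgs.msg import PoseStamped, TwistStamped

def pose_callback(msg):
    # React to the latest state estimate; here we simply log the altitude.
    rospy.loginfo("UAV altitude: %.2f m", msg.pose.position.z)

def main():
    rospy.init_node("external_module_example")
    # Hypothetical topic names for illustration only; consult the TeleKyb
    # documentation for the actual topic layout.
    rospy.Subscriber("/uav0/state_estimate", PoseStamped, pose_callback)
    cmd_pub = rospy.Publisher("/uav0/velocity_command", TwistStamped, queue_size=1)

    rate = rospy.Rate(50)  # 50 Hz command loop
    while not rospy.is_shutdown():
        cmd = TwistStamped()
        cmd.header.stamp = rospy.Time.now()
        cmd.twist.linear.z = 0.0   # placeholder hover-like command
        cmd_pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    main()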