This paper focuses on the idea of energy-efficient cooperative collision avoidance between two quadcopters. Two strategies for reciprocal online collision-avoiding actions (i.e., coherent maneuvers without requiring any real-time consensus) are proposed. In the first strategy, the UAVs change their speed, while in the second they change their heading to avoid a collision. The avoidance actions are parameterized in terms of the time difference between detecting the collision and starting the maneuver, and the amount of speed/heading change. These action parameters are used to generate intermediate waypoints, subsequently translated into a minimum-snap trajectory to be executed by a PD controller. For realism, the relative pose of the other UAV, estimated by each UAV (at the point of detection), is considered to be uncertain, thereby presenting substantial challenges to undertaking reciprocal actions. Performing supervised learning based on optimization-derived labels (as done in prior work) becomes computationally burdensome under these uncertainties. Instead, an (unsupervised) neuroevolution algorithm, called AGENT, is employed to learn a neural network (NN) model that takes the initial (uncertain) pose as state input and maps it to a robust optimal action. In neuroevolution, the NN topology and weights are simultaneously optimized using a specialized evolutionary process, where the fitness of candidate NNs is evaluated over a set of sample (in this case, various collision) scenarios. For further computational tractability, a surrogate model is used to estimate the energy consumption, and a classifier is used to identify trajectories where the controller fails. The trained neural network shows encouraging collision-avoidance performance over a large variety of unseen scenarios.
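As a rough illustration of the neuroevolution idea summarized above, the sketch below evolves only the weights of a tiny fixed-topology policy (the AGENT algorithm in the paper also evolves topology), scoring each candidate with a toy surrogate energy cost aggregated over sample scenarios. All names, network sizes, and the surrogate cost itself are hypothetical stand-ins, not the paper's actual implementation.

```python
import random

random.seed(0)

N_WEIGHTS = 8     # tiny fixed-topology policy (topology evolution omitted for brevity)
POP_SIZE = 20
GENERATIONS = 30

def policy_action(weights, pose):
    # maps an (uncertain) relative pose to (speed change, maneuver-start delay)
    dv = sum(w * x for w, x in zip(weights[:4], pose)) + weights[4]
    delay = sum(w * x for w, x in zip(weights[5:7], pose[:2])) + weights[7]
    return max(-1.0, min(1.0, dv)), max(0.0, min(1.0, delay))

def surrogate_energy(action, pose):
    # hypothetical surrogate model: larger speed changes and later starts cost more
    dv, delay = action
    return dv * dv + 0.5 * delay + 0.1 * abs(pose[0])

def fitness(weights, scenarios):
    # fitness of a candidate NN is aggregated over a set of sample collision scenarios
    return -sum(surrogate_energy(policy_action(weights, p), p) for p in scenarios)

scenarios = [[random.uniform(-1.0, 1.0) for _ in range(4)] for _ in range(10)]
pop = [[random.gauss(0.0, 1.0) for _ in range(N_WEIGHTS)] for _ in range(POP_SIZE)]
best_initial = max(fitness(w, scenarios) for w in pop)

for _ in range(GENERATIONS):
    pop.sort(key=lambda w: fitness(w, scenarios), reverse=True)
    elite = pop[: POP_SIZE // 4]          # keep the best quarter (elitism)
    pop = elite + [[w + random.gauss(0.0, 0.2) for w in random.choice(elite)]
                   for _ in range(POP_SIZE - len(elite))]

best_final = max(fitness(w, scenarios) for w in pop)
```

Because the elites are carried over unchanged each generation, the best fitness in the population never decreases over training.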
Cooperative autonomous approaches to avoiding collisions among small Unmanned Aerial Vehicles (UAVs) are central to the safe integration of UAVs within civilian airspace. One potential online cooperative approach is the concept of reciprocal actions, where both UAVs take pre-trained, mutually coherent actions that do not require active online coordination (thereby avoiding the associated computational burden and risk). This paper presents a learning-based approach to train such reciprocal maneuvers. Neuroevolution, which uses evolutionary algorithms to simultaneously optimize the topology and weights of neural networks, serves as the learning method, operating over a set of sample approach scenarios. Unlike most existing work (which minimizes travel distance, energy, or risk), the training process here focuses on the objective of minimizing the required detection range; this has important practical implications for alleviating the dependency on sophisticated sensors and their reliability in various environments. A specialized design of experiments and a line search are used to identify the minimum detection range for each sample scenario. To allow an efficient training process, a classifier is used to discard actions (without simulating them) for which the controller would fail. The model obtained via neuroevolution is observed to generalize well to (i.e., achieve successful collision avoidance over) unseen approach scenarios.
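The per-scenario minimum-detection-range search mentioned above can be sketched as a simple bisection line search, under the assumption that success is monotone in the detection range (a larger range never turns a successful avoidance into a failure). The `avoids_collision` callable here is a hypothetical stand-in for the full maneuver simulation; the paper's actual design of experiments and line search may differ.

```python
def min_detection_range(avoids_collision, lo=0.0, hi=100.0, tol=0.1):
    """Bisection line search for the smallest detection range (in meters)
    at which the reciprocal maneuver still avoids the collision.

    Assumes avoids_collision(lo) is False, avoids_collision(hi) is True,
    and that success is monotone in the detection range.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if avoids_collision(mid):
            hi = mid    # maneuver succeeds; try a smaller range
        else:
            lo = mid    # maneuver fails; a larger range is needed
    return hi

# Toy scenario where avoidance needs at least 37.3 m of detection range:
r = min_detection_range(lambda rng: rng >= 37.3)
```

The returned value overestimates the true threshold by less than `tol`, so `r` lands in [37.3, 37.4) for the toy scenario above.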