Balancing the objective of covering distance against the need to exploit thermal updrafts is the central decision-making challenge in cross-country soaring flight. The problem is hard to solve because actions with immediate rewards must be traded against actions that pay off only in the long term. Policies obtained through reinforcement learning offer the potential to handle such long-term correlations between actions taken and rewards received. This paper presents a reinforcement learning setup that yields a control strategy for GPS Triangle racing as a sample application of autonomous soaring. First, we frame the problem as a Markov decision process. In particular, we present a straightforward model of the three-degrees-of-freedom system dynamics of a glider aircraft that makes no simplifying assumptions about the wind field or the relative aircraft velocity. The competition task is then decomposed into subtasks. Stochastic gradient ascent solves the associated hierarchical reinforcement learning problem without requiring the designer to employ any further, potentially deficient heuristics. We present an implementation of the overall policy, alongside an updraft estimator, on embedded hardware aboard an unpiloted glider aircraft. Flight-test results validate the successful transfer of the hierarchical control policy, trained in simulation, to real-world autonomous cross-country soaring.
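To make the dynamics model concrete, the following is a minimal Python sketch of a generic point-mass (three-degrees-of-freedom) glider model with an additive wind vector. It is an illustration only: the parameter values, the function name, the parabolic drag polar, and the quasi-steady wind assumption are hypothetical and do not reproduce the paper's actual formulation, which avoids simplifying assumptions about the wind field and the relative aircraft velocity.

```python
import numpy as np

# Hypothetical parameters for a small glider; values are illustrative only.
G = 9.81               # gravitational acceleration [m/s^2]
RHO = 1.225            # air density [kg/m^3]
MASS = 5.0             # aircraft mass [kg]
S_REF = 0.5            # wing reference area [m^2]
CD0, K = 0.015, 0.025  # parabolic drag polar: C_D = CD0 + K * C_L^2


def glider_3dof_derivatives(state, controls, wind):
    """Point-mass (3-DOF) glider dynamics sketch.

    state    = [x, y, h, V, gamma, chi]  position [m], airspeed [m/s],
               flight-path angle [rad], heading [rad] (air-relative)
    controls = [c_lift, mu]              lift coefficient and bank angle [rad]
    wind     = [w_x, w_y, w_h]           local wind vector [m/s]
    """
    x, y, h, V, gamma, chi = state
    c_lift, mu = controls
    w_x, w_y, w_h = wind

    # Aerodynamic forces from the assumed drag polar.
    q_bar = 0.5 * RHO * V ** 2 * S_REF
    lift = q_bar * c_lift
    drag = q_bar * (CD0 + K * c_lift ** 2)

    # Kinematics: inertial velocity = air-relative velocity + wind.
    x_dot = V * np.cos(gamma) * np.cos(chi) + w_x
    y_dot = V * np.cos(gamma) * np.sin(chi) + w_y
    h_dot = V * np.sin(gamma) + w_h

    # Point-mass dynamics (quasi-steady wind assumed in this sketch).
    V_dot = -drag / MASS - G * np.sin(gamma)
    gamma_dot = (lift * np.cos(mu) / MASS - G * np.cos(gamma)) / V
    chi_dot = lift * np.sin(mu) / (MASS * V * np.cos(gamma))

    return np.array([x_dot, y_dot, h_dot, V_dot, gamma_dot, chi_dot])
```

Such a state derivative can be integrated with any standard ODE scheme to simulate trajectories for training; the specific numerical setup used in the paper is not reflected here.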