Neural network-based model reference adaptive control (MRAC) is an effective architecture, widely used in the flight control community, for combating significant uncertainties whose structure is unknown. In our previous work, a novel adaptive control architecture called the sparse neural network (SNN) was developed to improve the long-term learning and transient performance of flight vehicles subject to persistent uncertainties that lie in various regions throughout the operating regime. The SNN is designed to operate with small learning rates, in order to avoid high-frequency oscillations, and activates only a small number of neurons in the adaptive controller, in order to reduce the computational burden on the processor. In this paper, we enhance the SNN architecture by developing a new adaptive control term that mitigates the effect of an uncertain control effectiveness matrix. Furthermore, we design a robust control term and a strict dwell-time condition to ensure stability while switching between segments. We demonstrate the effectiveness of the SNN approach by controlling a sophisticated hypersonic vehicle model with flexible body effects.
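The abstract does not give the SNN equations, but the underlying MRAC mechanism it builds on can be illustrated with a minimal, hypothetical scalar example (the plant, gains, features, and learning rate below are assumptions for illustration, not the paper's architecture):

```python
import numpy as np

# Minimal MRAC sketch (hypothetical scalar example, not the paper's SNN):
#   plant:      x_dot  = x + u + delta(x),  delta(x) = 0.5*sin(x)  (matched uncertainty)
#   reference:  xm_dot = -xm + r                                    (desired behavior)
#   control:    u = -2*x + r - W_hat @ phi(x)      (cancels delta once W_hat is learned)
#   adaptation: W_hat_dot = gamma * e * phi(x), with tracking error e = x - xm

def phi(x):
    # Regressor features; the true uncertainty lies in their span.
    return np.array([1.0, x, np.sin(x)])

dt, T = 1e-3, 20.0
gamma = 5.0                 # modest learning rate, avoiding high-gain oscillation
x, xm, r = 0.0, 0.0, 1.0    # plant state, reference state, step command
W_hat = np.zeros(3)

for _ in range(int(T / dt)):
    e = x - xm
    u = -2.0 * x + r - W_hat @ phi(x)
    x += dt * (x + u + 0.5 * np.sin(x))   # plant with matched uncertainty
    xm += dt * (-xm + r)                  # reference model
    W_hat += dt * gamma * e * phi(x)      # gradient adaptive law

print(abs(x - xm))  # tracking error after adaptation
```

With the Lyapunov function V = e²/2 + W̃ᵀW̃/(2γ) this adaptive law gives V̇ = −e², so the tracking error converges even though the weights need not; the SNN idea in the abstract additionally keeps only a few features active at a time to cut the per-step cost of the weight update.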
Reinforcement learning has been established over the past decade as an effective tool for finding optimal control policies for dynamical systems, with recent focus on approaches that guarantee safety during the learning and/or execution phases. In general, safety guarantees are critical in reinforcement learning when the system is safety-critical and/or task restarts are not practically feasible. In optimal control theory, safety requirements are often expressed as state and/or control constraints. In recent years, reinforcement learning approaches that rely on persistent excitation have been combined with a barrier transformation to learn optimal control policies under state constraints. To soften the excitation requirements, model-based reinforcement learning methods that rely on exact model knowledge have also been integrated with the barrier transformation framework. The objective of this paper is to develop a safe reinforcement learning method for deterministic nonlinear systems with parametric uncertainties in the model, one that learns approximate constrained optimal policies without relying on stringent excitation conditions. To that end, a model-based reinforcement learning technique that utilizes a novel filtered concurrent learning method, along with a barrier transformation, is developed in this paper to realize simultaneous learning of unknown model parameters and approximate optimal state-constrained control policies for safety-critical systems.
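The abstract does not specify the barrier transformation, but a common choice in this line of work maps a constrained state interval (a, A), with a < 0 < A, bijectively onto the real line, so the constrained dynamics in x can be rewritten as unconstrained dynamics in the transformed state s = b(x). A sketch, with assumed constraint bounds:

```python
import math

# Hypothetical illustration of a barrier transformation for state-constrained RL.
# It maps the open interval (a, A), with a < 0 < A, onto all of R, with b(0) = 0.
a, A = -2.0, 3.0  # assumed constraint bounds (not taken from the abstract)

def b(x):
    """Barrier transform: (a, A) -> R."""
    return math.log((A * (a - x)) / (a * (A - x)))

def b_inv(s):
    """Inverse transform: R -> (a, A)."""
    return a * A * (math.exp(s) - 1.0) / (a * math.exp(s) - A)

# The round trip recovers the original state exactly, and the transform
# grows without bound as x approaches either constraint boundary:
for x in (-1.9, -0.5, 0.0, 1.0, 2.9):
    assert abs(b_inv(b(x)) - x) < 1e-9
print(b(-1.999), b(2.999))  # large magnitude near the boundaries
```

Because any trajectory of the transformed system corresponds, through b_inv, to a trajectory that stays strictly inside (a, A), a policy learned in the s-coordinates respects the state constraint by construction.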