Combining control engineering with nonparametric modeling techniques from machine learning makes it possible to control systems without an analytic description by using data-driven models. Most existing approaches separate learning, i.e., system identification based on a fixed dataset, and control, i.e., execution of the model-based control law. This separation makes the performance highly sensitive to the initial selection of training data and possibly requires very large datasets. This article proposes a learning feedback linearizing control law using online closed-loop identification. The employed Gaussian process model updates its training data only if the model uncertainty becomes too large. This event-triggered online learning ensures high data efficiency and thereby reduces the computational complexity, which is a major barrier to using Gaussian processes under real-time constraints. We propose safe strategies for forgetting data points to adhere to a budget constraint and to further increase data efficiency. We show asymptotic stability of the tracking error under the proposed event-triggering law and illustrate the effective identification and control in simulation.
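The event-triggered update described in this abstract can be sketched in a few lines: a Gaussian process regressor stores a new observation only when its own predictive uncertainty at that input exceeds a threshold, and discards the oldest point once a storage budget is reached. The class name, the squared-exponential kernel, the threshold value, and the forget-oldest rule are illustrative assumptions; the paper's "safe forgetting" strategies are more elaborate than dropping the oldest point.

```python
import numpy as np

def rbf(A, B, ell=1.0, sf=1.0):
    # Squared-exponential kernel between two sets of row vectors.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

class EventTriggeredGP:
    """GP regressor that stores a point only when its predictive
    standard deviation exceeds `trigger` (the event trigger), and
    forgets the oldest point once `budget` is exceeded."""

    def __init__(self, trigger=0.3, budget=50, noise=1e-2):
        self.X, self.y = [], []
        self.trigger, self.budget, self.noise = trigger, budget, noise

    def predict(self, x):
        if not self.X:
            return 0.0, 1.0  # prior mean and std (signal variance sf^2 = 1)
        X, y = np.array(self.X), np.array(self.y)
        K = rbf(X, X) + self.noise * np.eye(len(X))
        k = rbf(X, x[None, :]).ravel()
        mu = k @ np.linalg.solve(K, y)          # posterior mean
        var = max(1.0 - k @ np.linalg.solve(K, k), 0.0)
        return mu, np.sqrt(var)

    def observe(self, x, y):
        _, std = self.predict(x)
        if std > self.trigger:                   # event trigger fires
            self.X.append(x); self.y.append(y)
            if len(self.X) > self.budget:        # budget constraint
                self.X.pop(0); self.y.pop(0)     # forget oldest point
            return True
        return False
```

Because a second observation at an already-stored input leaves the posterior uncertainty below the trigger, redundant data are never added, which is the source of the data efficiency claimed above.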
Data-driven approaches in control allow for the identification of highly complex dynamical systems with minimal prior knowledge. However, properly incorporating model uncertainty into the design of a stabilizing control law remains challenging. Therefore, this article proposes a control Lyapunov function framework which semiglobally asymptotically stabilizes a partially unknown, fully actuated, control-affine system with high probability. We propose an uncertainty-based control Lyapunov function which utilizes the model fidelity estimate of a Gaussian process model to drive the system toward regions near the training data where the uncertainty is low. We show that this behavior maximizes the probability that the system is stabilized in the presence of power constraints, using an equivalence to dynamic programming. A simulation on a nonlinear system is provided.
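The core idea of an uncertainty-based control Lyapunov function can be illustrated with a greedy one-step sketch: augment a standard quadratic Lyapunov candidate with the GP's predictive variance, so that minimizing it pulls the state both toward the target and toward well-modeled regions. The function name `uclf_step`, the specific candidate V, the weight `rho`, and the finite candidate-input search are all illustrative assumptions; the paper derives the control law formally rather than by enumeration.

```python
import numpy as np

def uclf_step(x, f_hat, sigma, candidates, rho=1.0, dt=0.1):
    """Greedy step under an uncertainty-augmented Lyapunov candidate
    V(z) = ||z||^2 + rho * sigma(z)^2: among the candidate inputs,
    return the one whose one-step Euler prediction x + dt*f_hat(x,u)
    minimizes V. sigma(z) is the GP predictive standard deviation."""
    def V(z):
        return z @ z + rho * sigma(z) ** 2
    return min(candidates, key=lambda u: V(x + dt * f_hat(x, u)))
```

The variance term `rho * sigma(z)**2` is what biases the closed loop toward regions near the training data, as described in the abstract.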
Data-driven approaches from machine learning provide powerful tools to identify dynamical systems with limited prior knowledge of the model structure. This paper utilizes Gaussian processes, a Bayesian nonparametric approach, to learn a model for feedback linearization. By using a proper kernel structure, the training data for identification are collected while the system is operated by an existing controller. Using the identified dynamics, an improved controller based on feedback linearization is proposed. The analysis shows that the resulting system is globally uniformly ultimately bounded. We further derive a relationship between the training data of the system and the size of the ultimate bound to which the system converges with a certain probability. A simulation of a robotic system illustrates the proposed method.
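For a scalar control-affine system x' = f(x) + g(x)u, feedback linearization with a learned model replaces the unknown drift f by an estimate f_hat (for instance, a GP posterior mean) and cancels it, imposing linear tracking-error dynamics. The function below is a minimal sketch under these assumptions; the function name, the gain `k`, and the scalar setting are illustrative, and the residual error f - f_hat is what produces the ultimate bound discussed in the abstract.

```python
import numpy as np

def fblin_control(x, x_ref, dx_ref, f_hat, g, k=2.0):
    """Feedback-linearizing tracking law for x' = f(x) + g(x) u.
    Cancels the estimated drift f_hat and enforces the linear error
    dynamics e' = -k e, where e = x - x_ref. Assumes g(x) != 0
    (full actuation)."""
    e = x - x_ref
    v = dx_ref - k * e            # desired closed-loop derivative
    return (v - f_hat(x)) / g(x)  # cancel learned drift, inject v
```

When f_hat matches f exactly, a forward-Euler simulation of the closed loop contracts the tracking error geometrically; with a GP model, the mismatch instead yields the probabilistic ultimate bound derived in the paper.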
High-performance tracking control can only be achieved if a good model of the dynamics is available. However, such a model is often difficult to obtain from first-order physics alone. In this paper, we develop a data-driven control law that ensures closed-loop stability of Lagrangian systems. For this purpose, we use Gaussian process regression for the feedforward compensation of the unknown dynamics of the system. The gains of the feedback part are adapted based on the uncertainty of the learned model. Thus, the feedback gains are kept low as long as the learned model describes the true system sufficiently precisely. We show how to select a suitable gain adaptation law that incorporates the uncertainty of the model to guarantee a globally bounded tracking error. A simulation with a robot manipulator demonstrates the efficacy of the proposed control law.
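The gain-adaptation idea above can be sketched as a PD-type law whose feedforward term is the GP posterior mean and whose gains grow with the GP predictive standard deviation, so feedback stays low wherever the learned model is trusted. The function name, the affine scaling `kp0 + beta * std`, and the parameter values are illustrative assumptions, not the paper's exact adaptation law.

```python
def adaptive_gain_control(e, de, mu_hat, std, kp0=1.0, kd0=1.0, beta=5.0):
    """Uncertainty-adaptive PD tracking law: mu_hat (GP posterior mean)
    compensates the unknown dynamics as feedforward, while the PD gains
    are inflated proportionally to the GP predictive std, so the
    feedback effort rises only where the model is uncertain."""
    kp = kp0 + beta * std   # position gain grows with model uncertainty
    kd = kd0 + beta * std   # velocity gain grows with model uncertainty
    return -mu_hat - kp * e - kd * de
```

With std = 0 the controller reduces to a low-gain PD law around the learned feedforward, matching the abstract's claim that gains are kept low where the model is precise.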