To facilitate accurate tracking in unknown or uncertain environments, this paper proposes a simple learning (SL) strategy for feedback linearization control (FLC) of aerial robots subject to uncertainties. The SL strategy minimizes, via gradient descent, a cost function defined on the closed-loop error dynamics of the nominal system to derive adaptation rules for the feedback controller gains and the disturbance estimate in the feedback control law. In addition to deriving the SL adaptation rules, this paper proves closed-loop stability for a second-order uncertain nonlinear system. Moreover, it is shown that the SL strategy finds the global optimum, while the controller gains and disturbance estimate converge to finite values, which implies bounded control action in the steady state. A simulation study further shows that the simple learning-based FLC (SL-FLC) framework can ensure the desired closed-loop error dynamics in the presence of disturbances and modeling uncertainties. Finally, to validate the SL-FLC framework in real time, the trajectory tracking problem of a tilt-rotor tricopter unmanned aerial vehicle under uncertain conditions is studied in three case scenarios, in which disturbances in the form of mass variation, ground effect, and wind gust are induced. The real-time results illustrate that the SL-FLC framework achieves better tracking performance than the traditional FLC method: it maintains nominal control performance in the absence of modeling uncertainties and external disturbances, and exhibits robust control performance in their presence.

INDEX TERMS Feedback linearization control, nonlinear system, uncertain systems, learning control, unmanned aerial vehicle.
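The adaptation idea in the abstract can be sketched for a scalar second-order system ẍ = u + d with an unknown constant disturbance d: the controller penalizes the deviation of the measured error dynamics from the desired nominal ones and descends that cost to update its gains and disturbance estimate. The cost form, gain names, and learning rates below are illustrative assumptions (an MIT-rule-style gradient update), not the paper's exact equations:

```python
import numpy as np

def simulate_sl_flc(T=10.0, dt=1e-3):
    """Regulate x -> 0 for xdd = u + d while adapting gains and d_hat."""
    x, xd = 1.0, 0.0        # state: position, velocity
    kp, kd = 4.0, 4.0       # feedback gains, adapted online
    d_hat = 0.0             # disturbance estimate, adapted online
    d_true = 0.5            # unknown constant disturbance (hidden from controller)
    g_k, g_d = 0.5, 2.0     # learning rates (assumed values)
    for _ in range(int(T / dt)):
        e, ed = x, xd                      # regulation errors (reference = 0)
        u = -kp * e - kd * ed - d_hat      # feedback-linearizing control law
        edd = u + d_true                   # actual error acceleration
        # Deviation from the desired nominal error dynamics
        # edd + kd*ed + kp*e = 0, with cost J = 0.5 * eps**2.
        eps = edd + kd * ed + kp * e
        # Gradient-descent updates: the sensitivities of eps through the
        # control input are -e, -ed, and -1, giving MIT-rule-style laws.
        kp += g_k * eps * e * dt
        kd += g_k * eps * ed * dt
        d_hat += g_d * eps * dt
        # Euler integration of the plant
        xd += edd * dt
        x += xd * dt
    return x, d_hat, kp, kd
```

Once u is substituted, eps reduces to d_true - d_hat, so the disturbance estimate converges exponentially while the gains settle to finite values, mirroring the boundedness claim in the abstract.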
One of the major challenges of model predictive control (MPC) for robotic applications is the non-trivial weight tuning required when crafting the objective function. This process is often carried out by trial and error, so the optimality of the weights and the time the process takes depend heavily on the user's skill and experience. In this study, we present a generic, user-independent framework that automates the tuning process via reinforcement learning. The proposed method is demonstrated by tuning a nonlinear MPC (NMPC) employed for trajectory tracking control of aerial robots. It discovers suitable weights in under an hour of iterative Gazebo simulations running on a standard desktop computer. Real-world experiments illustrate that the NMPC weights found by the proposed method yield satisfactory trajectory tracking performance.
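As a toy illustration of closing the tuning loop automatically, the sketch below tunes the gains of a PD controller (standing in for NMPC weights) against a cheap double-integrator rollout (standing in for the Gazebo simulation), using a simple accept-if-better random search in place of the RL agent. All names, bounds, and parameters here are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def rollout_cost(weights, T=5.0, dt=1e-2):
    """Tracking cost of a candidate controller over one simulated rollout."""
    kp, kd = weights
    x, xd = 0.0, 0.0
    cost = 0.0
    for k in range(int(T / dt)):
        ref = np.sin(0.5 * k * dt)        # reference trajectory
        u = kp * (ref - x) - kd * xd      # candidate PD controller
        u = np.clip(u, -10.0, 10.0)       # actuator limits
        xd += u * dt                      # double-integrator plant
        x += xd * dt
        cost += (ref - x) ** 2 * dt       # accumulated tracking error
    return cost

def tune(iterations=60, seed=0):
    """Accept-if-better random search over the controller weights."""
    rng = np.random.default_rng(seed)
    best_w = np.array([1.0, 1.0])
    best_c = rollout_cost(best_w)
    for _ in range(iterations):
        cand = best_w + rng.normal(0.0, 0.5, size=2)  # perturb weights
        cand = np.clip(cand, 0.1, 50.0)               # keep weights valid
        c = rollout_cost(cand)
        if c < best_c:                                # keep improvements only
            best_w, best_c = cand, c
    return best_w, best_c
```

Each iteration plays the role of one Gazebo episode: simulate with candidate weights, score the tracking error, and keep the candidate only if it improves on the incumbent.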