In this paper, an event-triggered near-optimal control structure is developed for nonlinear continuous-time systems with control constraints. To handle the saturating actuators, a nonquadratic cost function is introduced and the Hamilton-Jacobi-Bellman (HJB) equation for constrained nonlinear continuous-time systems is formulated. To solve the HJB equation, an actor-critic framework is presented: the critic network approximates the cost function and the action network estimates the optimal control law. In addition, the control signal is transmitted in an aperiodic manner to reduce the computational and transmission costs. Both networks are updated only at the trigger instants determined by the event-triggered condition. A detailed Lyapunov analysis guarantees that the closed-loop event-triggered system is ultimately bounded. Three case studies demonstrate the effectiveness of the proposed method.
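The abstract does not state the exact form of the nonquadratic cost, but the standard choice in the constrained-HJB literature (for inputs bounded by a saturation level \(\lambda\), with \(Q(x)\) positive definite and \(R\) diagonal positive definite) is:

```latex
% Nonquadratic cost functional commonly used for saturating actuators
% (standard form; the paper's specific choice may differ).
V(x(t)) = \int_t^{\infty} \bigl[ Q(x(\tau)) + W(u(\tau)) \bigr]\, d\tau,
\qquad
W(u) = 2 \int_0^{u} \bigl( \lambda \tanh^{-1}(v/\lambda) \bigr)^{\!\top} R \, dv .
```

The \(\tanh^{-1}\) integrand makes the resulting optimal control law naturally saturate at \(\pm\lambda\), which is why a quadratic \(u^{\top}Ru\) term is replaced by \(W(u)\) in the constrained setting.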
This paper proposes a novel event-triggered adaptive dynamic programming (ADP) control method for nonlinear continuous-time systems with unknown internal states. Compared with the traditional ADP design, which uses a fixed sampling period, the event-triggered method samples the state and updates the controller only when necessary, thereby reducing the computation cost and transmission load. Event-triggered methods are usually based on the entire system state, which is either infeasible or very difficult to obtain in practical applications. This paper therefore integrates a neural-network-based observer to recover the internal states from the measurable feedback. Both the proposed observer and the controller are updated aperiodically according to the designed triggering condition. Neural network techniques are applied to estimate the performance index and to compute the control action. The stability of the proposed method is established by a Lyapunov construction for both the continuous and jump dynamics. Simulation results verify the theoretical analysis and confirm the efficiency of the proposed method.
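To make the aperiodic sampling idea concrete, here is a minimal sketch (not the paper's algorithm) of an event-triggered loop for a scalar plant: the controller uses the state sample held since the last trigger, and a new sample is taken only when the event error exceeds a threshold. The plant, gain `k`, and threshold `e_T` are all illustrative assumptions.

```python
# Hedged sketch: event-triggered state sampling for the stable scalar
# plant x' = -x + u, with u = -k * x_s, where x_s is the state held
# since the last trigger instant. Gains and threshold are assumptions.

def simulate(T=10.0, dt=0.01, k=1.0, e_T=0.05):
    steps = int(T / dt)
    x = 1.0          # plant state
    x_s = x          # last sampled state, held between triggers
    triggers = 0
    for _ in range(steps):
        gap = abs(x - x_s)        # event error e(t) = x(t) - x_s
        if gap > e_T:             # triggering condition violated
            x_s = x               # sample the state, update controller
            triggers += 1
        u = -k * x_s              # control uses the held sample only
        x += dt * (-x + u)        # Euler step of the plant
    return triggers, steps, x

triggers, steps, x_final = simulate()
print(triggers, steps, round(x_final, 3))
```

Note that the number of triggers is far smaller than the number of integration steps, which is the computational saving the event-triggered design aims for; the price is that the state only converges to a neighborhood of the origin whose size scales with `e_T`.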
This paper presents a novel adaptive event-triggered control method based on the heuristic dynamic programming (HDP) technique for nonlinear discrete-time systems with unknown system dynamics. In the proposed method, the control law is updated only when the event-triggered condition is violated. Compared with the periodic updates in traditional adaptive dynamic programming (ADP) control, the proposed method reduces the computation and transmission costs. An actor-critic framework is used to learn the optimal event-triggered control law and the value function, and a model network is designed to estimate the system state vector. The main contribution of this paper is a new trigger threshold for discrete-time systems. A detailed Lyapunov stability analysis shows that the proposed event-triggered controller can asymptotically stabilize the discrete-time system. Finally, the method is tested on two different discrete-time systems, and the simulation results are included.
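The abstract does not specify the new threshold, but discrete-time event-triggered designs often use a state-dependent (relative) threshold rather than a fixed one, triggering when the event error exceeds a fraction of the current state norm. The sketch below illustrates that generic idea on an assumed linear plant; the matrices `A`, `B`, gain `K`, and factor `sigma` are all hypothetical, not taken from the paper.

```python
import numpy as np

# Hedged sketch: state-dependent trigger threshold for an assumed
# discrete-time plant x_{k+1} = A x_k + B u_k. Trigger when
# ||e_k|| > sigma * ||x_k||, where e_k is the held-sample error.

A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
K = np.array([[1.0, 2.0]])    # assumed stabilizing feedback gain
sigma = 0.1                   # relative trigger threshold (assumption)

x = np.array([[1.0], [0.0]])
x_s = x.copy()                # state held since the last trigger
triggers = 0
for k in range(200):
    e = x - x_s               # event error between true and held state
    if np.linalg.norm(e) > sigma * np.linalg.norm(x):
        x_s = x.copy()        # resample: controller gets a fresh state
        triggers += 1
    u = -K @ x_s              # control computed from the held sample
    x = A @ x + B @ u
print(triggers, float(np.linalg.norm(x)))
```

Because the threshold shrinks with the state, updates become more frequent near the origin, which is one way a discrete-time design can retain asymptotic (rather than merely ultimate-bounded) stability.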