2018
DOI: 10.1109/tnnls.2018.2791419

Optimal Guaranteed Cost Sliding Mode Control for Constrained-Input Nonlinear Systems With Matched and Unmatched Disturbances

Abstract: Based on integral sliding mode and approximate dynamic programming (ADP) theory, a novel optimal guaranteed cost sliding mode control is designed for constrained-input nonlinear systems with matched and unmatched disturbances. When the system moves on the sliding surface, the optimal guaranteed cost control problem of sliding mode dynamics is transformed into the optimal control problem of a reformulated auxiliary system with a modified cost function. The ADP algorithm based on single critic neural network (NN…
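As a hedged illustration of the single-critic ADP idea the abstract describes (the system, features, and learning rates below are my own toy choices, not the paper's algorithm), a critic V(x) ≈ w·φ(x) can be trained by semi-gradient descent on an HJB-style residual, with the input constraint handled through a tanh saturation:

```python
import numpy as np

# Toy scalar system x_dot = a*x + b*u (assumed for illustration only)
a, b = -1.0, 1.0

dphi = lambda x: np.array([2*x, 4*x**3])   # gradients of features [x^2, x^4]

w  = np.zeros(2)                           # critic weights, V(x) ≈ w · [x^2, x^4]
xs = np.linspace(-1.0, 1.0, 41)            # sample states

for _ in range(500):                       # semi-gradient descent on 0.5*e^2
    grad = np.zeros(2)
    for x in xs:
        dV = w @ dphi(x)                   # dV/dx from the critic
        u  = -np.tanh(0.5 * b * dV)        # constrained (|u| <= 1) control
        e  = x**2 + u**2 + dV * (a * x + b * u)  # HJB-style residual
        grad += e * dphi(x) * (a * x + b * u)    # de/dw, holding u fixed
    w -= 0.05 * grad / len(xs)
```

On this toy problem the residual shrinks toward zero across the sampled states; the single-critic structure (no separate actor network) mirrors the abstract's setup only in spirit.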

Cited by 85 publications (56 citation statements)
References 47 publications
“…To address the issue, this paper uses the trial-and-error method to obtain the initial admissible control policies, which is in the same spirit as the work of Yang et al.38 2. It seems challenging to solve (30) for V^(j)(x), u^(j+1)(x), and (j+1)(x). Fortunately, by using actor and critic approximators as well as the method of weighted residuals,39 we can derive V^(j)(x), u^(j+1)(x), and (j+1)(x) via (30).…”
Section: Off-policy Iteration Algorithm
confidence: 99%
“…It seems challenging to solve (30) for V^(j)(x), u^(j+1)(x), and (j+1)(x). Fortunately, by using actor and critic approximators as well as the method of weighted residuals,39 we can derive V^(j)(x), u^(j+1)(x), and (j+1)(x) via (30). The detailed procedure is illustrated in Section 4.2.…”
Section: Off-policy Iteration Algorithm
confidence: 99%
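The excerpt above cites the method of weighted residuals for deriving the critic and actor iterates. As a hedged sketch of one such step (the system, fixed policy, and feature basis here are illustrative assumptions, not the cited paper's equation (30)), policy evaluation for a critic V(x) ≈ w·φ(x) reduces to a linear least-squares problem over sampled states:

```python
import numpy as np

# Toy system x_dot = a*x + b*u with a FIXED policy u = -k*x (all illustrative)
a, b, k = -1.0, 1.0, 0.5

dphi = lambda x: np.array([2*x, 4*x**3])   # gradients of features [x^2, x^4]

xs = np.linspace(-1.0, 1.0, 41)            # collocation (sample) states
A_rows, rhs = [], []
for x in xs:
    u = -k * x
    f = a * x + b * u                      # closed-loop dynamics
    # Policy-evaluation residual: dphi(x)·w * f + (x^2 + u^2) = 0
    A_rows.append(dphi(x) * f)
    rhs.append(-(x**2 + u**2))

# Least-squares (weighted-residual) solve for the critic weights w
w, *_ = np.linalg.lstsq(np.array(A_rows), np.array(rhs), rcond=None)
```

Because the residual is linear in w, forcing it to (near) zero over the sample set needs no iteration here; for this toy problem the exact value function V(x) = (1.25/3)·x² lies in the feature span, so the solve recovers it.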