2004
DOI: 10.1109/tsmcb.2003.818433
On Iterative Learning From Different Tracking Tasks in the Presence of Time-Varying Uncertainties

Abstract: In this paper, we introduce a new iterative learning control (ILC) method, which enables learning from different tracking control tasks. The proposed method overcomes the limitation of traditional ILC in that the target trajectories of any two consecutive iterations can be completely different. For nonlinear systems with time-varying and time-invariant parametric uncertainties, the new learning method works effectively to nullify the tracking error. To facilitate the learning control system design and analysi…
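For contrast with the paper's approach, a minimal sketch of the traditional contraction-mapping (P-type) ILC that the abstract and the citation statements below refer to. Everything here is an illustrative assumption (the plant `x(t+1) = a*x(t) + b*u(t)`, the gain `gamma`, the fixed sinusoidal reference), not the paper's own design; in particular, this classical scheme requires the same desired trajectory on every iteration, which is exactly the restriction the paper removes.

```python
import numpy as np

def p_type_ilc(a=0.2, b=1.0, gamma=0.5, T=20, iters=30):
    """Contraction-mapping (P-type) ILC on the scalar plant
    x(t+1) = a*x(t) + b*u(t), y(t) = x(t), repeated over identical trials.

    Update law: u_{k+1}(t) = u_k(t) + gamma * e_k(t+1), with e_k = y_d - y_k.
    Convergence requires |1 - gamma*b| < 1 and, unlike the paper's
    Lyapunov-based method, an unchanged desired trajectory y_d every trial.
    """
    t = np.arange(T + 1)
    y_d = np.sin(2 * np.pi * t / T)   # fixed desired trajectory
    u = np.zeros(T)                   # initial control guess
    errors = []
    for _ in range(iters):
        x = np.zeros(T + 1)           # identical initial condition each trial
        for n in range(T):
            x[n + 1] = a * x[n] + b * u[n]
        e = y_d - x
        errors.append(np.max(np.abs(e[1:])))
        u = u + gamma * e[1:]         # learn from this trial's tracking error
    return errors

errors = p_type_ilc()
print(errors[0], errors[-1])          # peak error shrinks across iterations
```

With these parameters the error contracts monotonically from trial to trial; the paper's contribution is to keep such convergence when `y_d` itself changes between iterations.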

Cited by 192 publications (151 citation statements). References 21 publications.
“…In other words, the learning process remains effective even if the desired trajectory (or the reference model) is changing from iteration to iteration. This is one important advantage of this Lyapunov-based framework with respect to the traditional contraction mapping-based frameworks (see, for instance, Reference [13]). …”
Section: Proof
confidence: 99%
“…In the sequel, the period of θ̄₁ is l̄₁, where l̄₁ is the least common multiple of l₁ and l. The upper bound of θ̄₁(t) is also defined accordingly. Following a similar procedure in [36], we can prove the asymptotic convergence under the switching PAC design.…”
Section: Remark
confidence: 99%
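The least-common-multiple construction in the statement above can be checked directly: the sum of two sequences that are periodic with periods l₁ and l repeats with period lcm(l₁, l). A small sketch (the example periods 6 and 4 are illustrative, not taken from the cited work):

```python
from math import lcm

l1, l = 6, 4                 # periods of two time-periodic uncertainties
l1_bar = lcm(l1, l)          # period of their combination

# two periodic integer sequences and their sum
theta1 = [k % l1 for k in range(100)]   # period l1
theta2 = [k % l for k in range(100)]    # period l
combined = [a + b for a, b in zip(theta1, theta2)]

# the combined signal repeats every l1_bar samples
print(l1_bar, all(combined[k] == combined[k + l1_bar]
                  for k in range(100 - l1_bar)))
```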
“…It is unnecessary to apply the second-order ILC law to all parameters, especially to the iteration-invariant ones, which can be handled simply by a first-order learning scheme, as for example in [17,22].…”
Section: Extension to Unknown Time-Varying Input Gain and Mixed Uncertainties
confidence: 99%
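The distinction drawn in the statement above can be illustrated with a toy scalar estimation problem. The update laws, gain, and period-2 parameter sequence below are illustrative assumptions, not the cited papers' actual designs: a first-order law feeds the error back to the very next iteration and suffices for an iteration-invariant parameter, whereas a parameter that repeats with iteration period 2 (a second-order internal model) calls for a law that updates from two iterations back.

```python
def learn(theta_seq, order, gamma=0.5, iters=100):
    """Iteratively estimate a parameter theta that may vary over iterations.

    order=1: hat[k+1] = hat[k] + gamma * (theta[k] - hat[k])
    order=2: hat[k+2] = hat[k] + gamma * (theta[k] - hat[k])
    Returns the per-iteration estimation errors |theta[k] - hat[k]|.
    """
    hat = [0.0] * (iters + order)
    errs = []
    for k in range(iters):
        e = theta_seq[k % len(theta_seq)] - hat[k]
        hat[k + order] = hat[k] + gamma * e   # feed error 'order' iterations ahead
        errs.append(abs(e))
    return errs

# iteration-invariant parameter: first-order learning drives the error to zero
print(learn([1.0], order=1)[-1])

# parameter alternating with iteration period 2: first-order stalls,
# second-order (matching the period-2 internal model) converges
print(learn([1.0, -1.0], order=1)[-1])
print(learn([1.0, -1.0], order=2)[-1])
```

The first and third errors shrink toward zero, while the first-order law applied to the alternating parameter settles at a nonzero residual, which is the point of reserving the second-order law for the iteration-varying parameters only.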
“…For the part concerning this uncertainty, the method used in [17] is applied similarly here. As a special case of the second-order internal model, its proof can also be regarded as a direct application of the framework established in Theorem 1.…”
Section: Extension to Unknown Time-Varying Input Gain and Mixed Uncertainties
confidence: 99%