This paper gives new results on the design of iterative learning control laws that enable one-step design of a stabilizing feedback controller in the time domain and a feedforward (learning) controller which guarantees convergence in the trial domain. The Kalman-Yakubovich-Popov lemma is central to the analysis and the resulting computations use convex optimization over linear matrix inequalities. An illustrative example is given based on the model of an experimental facility that has been used to compare alternative iterative learning control designs.
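As a rough illustration of the type of computation referred to above, the sketch below uses the Python package cvxpy to solve a standard linear matrix inequality for a stabilizing state-feedback gain via the change of variables $K = YP^{-1}$. This is a minimal sketch only, not the formulation developed in this paper (which couples the time-domain stabilization step with trial-domain convergence conditions through the Kalman-Yakubovich-Popov lemma); the plant matrices A and B, the tolerance eps, and the choice of solver are hypothetical placeholders.

```python
# Minimal LMI sketch (hypothetical plant data, not the paper's design):
# find P > 0 and Y such that A P + P A^T + B Y + Y^T B^T < 0,
# then K = Y P^{-1} renders A + B K Hurwitz.
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [2.0, -1.0]])   # hypothetical unstable plant (eigenvalues 1, -2)
B = np.array([[0.0],
              [1.0]])
n, m = B.shape[0], B.shape[1]

P = cp.Variable((n, n), symmetric=True)  # Lyapunov matrix
Y = cp.Variable((m, n))                  # Y = K P
eps = 1e-6                               # small margin to approximate strict inequalities

lmi = A @ P + P @ A.T + B @ Y + Y.T @ B.T
constraints = [P >> eps * np.eye(n),
               lmi << -eps * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve(solver=cp.SCS)

K = Y.value @ np.linalg.inv(P.value)
print("K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K))
```

The change of variables $Y = KP$ is what makes the synthesis condition linear in the decision variables, which is the same device that allows the designs in this paper to be computed by convex optimization.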
I. INTRODUCTION

Many industrial systems operate over a finite duration and make repeated executions of the same task, where after each one is completed the system returns to the starting position ready for the next to begin. A generic example is a robot executing a pick and place operation where the steps involved are: i) collect a payload from a fixed location, ii) transfer it over a finite duration, iii) place it on a conveyor under synchronization and iv) return to the starting location, and then repeat i)-iv) as many times as required.

Each execution is known as a trial and the novel feature of iterative learning control (ILC) is to use information from the previous trial, or a finite number of previous trials, in the calculation of the input to be used on the next one and hence improve performance from trial to trial. The survey papers [1], [2] are one starting point for a comprehensive overview of this control systems design method.

In ILC, if $y_{\mathrm{ref}}(t)$ is the reference signal and the subscript $k$ denotes the trial number then, for a plant with output $y$, $e_k(t) = y_{\mathrm{ref}}(t) - y_k(t)$ is the corresponding error on trial $k$. The problem of constructing a sequence of inputs such that the performance achieved gradually improves with each successive trial can then be refined to a convergence condition on the input and error, i.e.,