Iterative learning control (ILC) is based on the notion that the performance of a system that executes the same task multiple times can be improved by learning from previous executions (trials, iterations, passes). For instance, a basketball player shooting a free throw from a fixed position can improve his or her ability to score by practicing the shot repeatedly. During each shot, the basketball player observes the trajectory of the ball and consciously plans an alteration in the shooting motion for the next attempt. As the player continues to practice, the correct motion is learned and becomes ingrained into the muscle memory so that the shooting accuracy is iteratively improved. The converged muscle motion profile is an open-loop control generated through repetition and learning. This type of learned open-loop control strategy is the essence of ILC.

We consider learning controllers for systems that perform the same operation repeatedly and under the same operating conditions. For such systems, a nonlearning controller yields the same tracking error on each pass. Although error signals from previous iterations are information rich, they are unused by a nonlearning controller. The objective of ILC is to improve performance by incorporating error information into the control for subsequent iterations. In doing so, high performance can be achieved with low transient tracking error despite large model uncertainty and repeating disturbances.

ILC differs from other learning-type control strategies, such as adaptive control, neural networks, and repetitive control (RC). Adaptive control strategies modify the controller, which is a system, whereas ILC modifies the control input, which is a signal [1]. Additionally, adaptive controllers typically do not take advantage of the information contained in repetitive command signals. Similarly, neural network learning involves the modification of controller parameters rather than a control signal; in this case, large networks of nonlinear neurons are modified. These large networks require extensive training data, and fast convergence may be difficult to
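To make the idea of incorporating previous-trial error into the next trial's input concrete, the following is a minimal sketch, assuming a standard first-order ("P-type") learning law u_{j+1}(k) = u_j(k) + L e_j(k+1) applied to a hypothetical stable first-order discrete-time plant; the plant parameters, learning gain, and reference trajectory are illustrative choices, not taken from this article.

```python
# Minimal P-type ILC sketch (illustrative assumptions: plant, gains, reference).
import numpy as np

N = 50                    # trial length (samples per iteration)
a, b = 0.9, 0.5           # hypothetical plant: x[k+1] = a*x[k] + b*u[k], y[k] = x[k]
L = 1.0                   # learning gain (chosen so |1 - L*b| < 1 for convergence)
r = np.sin(2 * np.pi * np.arange(N) / N)   # repeated reference trajectory

u = np.zeros(N)           # control signal, refined from trial to trial

def run_trial(u):
    """Execute one pass of the repeated task from the same initial condition."""
    x, y = 0.0, np.zeros(N)
    for k in range(N):
        y[k] = x
        x = a * x + b * u[k]
    return y

for trial in range(10):
    y = run_trial(u)
    e = r - y             # tracking error on this iteration
    # Learn: feed this trial's error forward into the next trial's input.
    # The one-sample shift accounts for the plant's unit delay from u[k] to y[k+1].
    u[:-1] = u[:-1] + L * e[1:]
    print(f"trial {trial}: max |error| = {np.max(np.abs(e)):.4f}")
```

Running this loop, the tracking error shrinks from one trial to the next even though no plant model is used inside the update; the converged input plays the role of the learned open-loop control described above.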