In this paper, an output-feedback solution to the infinite-horizon linear quadratic tracking (LQT) problem for unknown discrete-time systems is proposed. An augmented system composed of the system dynamics and the reference-trajectory dynamics is constructed. The state of this augmented system is reconstructed from a limited number of past measurements of its input, output, and reference trajectory. A novel Bellman equation is developed that evaluates the value function of a fixed policy using only input, output, and reference-trajectory data from the augmented system. Using approximate dynamic programming, a class of reinforcement learning methods, the LQT problem is then solved online without any knowledge of the augmented system dynamics, by measuring only the input, output, and reference trajectory. Both policy iteration (PI) and value iteration (VI) algorithms are developed that converge to the optimal controller while requiring only these measured data, and the convergence of both algorithms is shown. A simulation example verifies the effectiveness of the proposed control scheme.
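The VI recursion this abstract refers to can be illustrated on a small toy problem. The sketch below uses hypothetical, assumed-known augmented dynamics purely to show the fixed point that value iteration converges to; the paper's actual algorithm is model-free and works from measured input/output/reference data instead.

```python
# Hypothetical illustration of value iteration (VI) for a discounted LQT
# problem on a small augmented system. All matrices here are invented toy
# values; the paper's method does NOT assume knowledge of these dynamics.
import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])              # hypothetical plant dynamics
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[1.0]])                   # reference generator r_{k+1} = F r_k

# Augmented system X = [x; r],  X_{k+1} = T X_k + B1 u_k
T = np.block([[A, np.zeros((2, 1))],
              [np.zeros((1, 2)), F]])
B1 = np.vstack([B, np.zeros((1, 1))])

C1 = np.hstack([C, -np.eye(1)])         # tracking error e_k = C x_k - r_k
Q1 = C1.T @ C1                          # penalizes e_k^T e_k
R = np.eye(1)
gamma = 0.95                            # discount keeps the LQT cost finite

# VI: P_{j+1} = Q1 + gamma T'P_j T
#              - gamma T'P_j B1 (R + gamma B1'P_j B1)^{-1} gamma B1'P_j T
P = np.zeros_like(T)                    # VI may start from P = 0
for _ in range(500):
    G = np.linalg.solve(R + gamma * B1.T @ P @ B1, gamma * B1.T @ P @ T)
    P = Q1 + gamma * T.T @ P @ T - gamma * T.T @ P @ B1 @ G
K = np.linalg.solve(R + gamma * B1.T @ P @ B1, gamma * B1.T @ P @ T)
# u_k = -K @ [x_k; r_k] applies feedback and feedforward in one gain
```

After convergence, `P` satisfies the discounted augmented Riccati equation and `K` is the corresponding optimal tracking gain for this toy model.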
This paper presents an online solution to the infinite-horizon linear quadratic tracker (LQT) using reinforcement learning. It is first assumed that the value function for the LQT is quadratic in terms of the reference trajectory and the state of the system. Then, using the quadratic form of the value function, an augmented algebraic Riccati equation (ARE) is derived to solve the LQT. Using this formulation, both the feedback and feedforward parts of the optimal control solution are obtained simultaneously by solving the augmented ARE. To find the solution to the augmented ARE online, policy iteration, a class of reinforcement learning algorithms, is employed. This algorithm is implemented on an actor-critic structure using two neural networks, and it does not need knowledge of the drift system dynamics or the command-generator dynamics. A simulation example shows that the proposed algorithm works for a system with partially unknown dynamics.
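The policy-iteration structure described above can be sketched on a toy augmented system. This is a hypothetical, model-based illustration only: the paper evaluates each policy with a neural-network critic from measured data, whereas here policy evaluation uses a Lyapunov recursion on assumed-known dynamics so that the feedback/feedforward split of the augmented gain is visible.

```python
# Hypothetical sketch of policy iteration (PI) for the augmented-ARE LQT
# formulation. Matrices are invented toy values; the model-based policy
# evaluation below stands in for the paper's data-driven actor-critic.
import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])              # hypothetical plant
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[1.0]])                   # command generator r_{k+1} = F r_k

T = np.block([[A, np.zeros((2, 1))],   # augmented dynamics for X = [x; r]
              [np.zeros((1, 2)), F]])
B1 = np.vstack([B, np.zeros((1, 1))])
C1 = np.hstack([C, -np.eye(1)])        # tracking error e_k = C x_k - r_k
Q1 = C1.T @ C1
R = np.eye(1)
gamma = 0.95                            # discount keeps the cost finite

K = np.zeros((1, 3))                    # K = 0 is admissible (plant is stable)
for _ in range(20):
    # Policy evaluation: P solves the Lyapunov equation
    #   P = Q1 + K'RK + gamma (T - B1 K)' P (T - B1 K)
    Acl = T - B1 @ K
    Qk = Q1 + K.T @ R @ K
    P = np.zeros_like(T)
    for _ in range(2000):
        P = Qk + gamma * Acl.T @ P @ Acl
    # Policy improvement
    K = np.linalg.solve(R + gamma * B1.T @ P @ B1, gamma * B1.T @ P @ T)

Kx, Kr = K[:, :2], K[:, 2:]             # feedback and feedforward parts
# u_k = -Kx x_k - Kr r_k
```

Because the augmented gain `K` acts on `[x; r]`, slicing it yields the feedback gain `Kx` and the feedforward gain `Kr` simultaneously, which is the point of the augmented-ARE formulation.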