Drawing on insights from the numerical approximation of ordinary differential equations (ODEs) and on the formulation and solution of temporal difference (TD) learning as a Galerkin relaxation, I propose a new class of TD learning algorithms. Applying improved numerical methods yields a guaranteed order-of-magnitude reduction in the Taylor series error of the solution to the ODE for the parameter θ(t), which defines the linearly parameterized value function. I call the resulting discrete-time reinforcement learning (RL) algorithm, translated from the continuous-time ODE via the theory of stochastic approximation (SA), Predictor-Corrector Temporal Difference (PCTD). Both causal and non-causal implementations of the algorithm are provided, and simulation results on an infinite-horizon task compare the original TD(0) algorithm against both versions of PCTD(0).
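To make the predictor-corrector idea concrete, the following is a minimal sketch of how a Heun-style (explicit trapezoidal) predictor-corrector step could be grafted onto the TD(0) update for a linearly parameterized value function. This is an illustrative assumption on my part, not the paper's exact PCTD(0) update: standard TD(0) takes a single Euler-style step along the TD direction, while the sketched variant predicts with that step and then corrects by averaging the TD directions at the current and predicted parameters.

```python
import numpy as np

def td0_update(theta, phi_s, phi_s_next, r, alpha, gamma):
    """Standard TD(0): a single Euler-style step along the TD direction."""
    delta = r + gamma * (phi_s_next @ theta) - (phi_s @ theta)
    return theta + alpha * delta * phi_s

def pctd0_update(theta, phi_s, phi_s_next, r, alpha, gamma):
    """Hypothetical Heun-style predictor-corrector TD(0) sketch.

    Predictor: take a plain TD(0) step from theta.
    Corrector: re-evaluate the TD direction at the predicted parameters
    and average it with the original direction (trapezoidal rule).
    """
    # Predictor: TD direction evaluated at the current parameters
    delta0 = r + gamma * (phi_s_next @ theta) - (phi_s @ theta)
    g0 = delta0 * phi_s
    theta_pred = theta + alpha * g0
    # Corrector: TD direction re-evaluated at the predicted parameters
    delta1 = r + gamma * (phi_s_next @ theta_pred) - (phi_s @ theta_pred)
    g1 = delta1 * phi_s
    return theta + 0.5 * alpha * (g0 + g1)
```

On a trivial single-state chain with reward 1 and discount 0.9 (true value 10), both updates converge to the same fixed point; the corrector step is what buys the higher-order accuracy in tracking the underlying ODE.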