We consider the class of planning and sequential decision making problems where the state space has continuous components, but the available actions come from a discrete set, and argue that a suitable approach for solving them could involve an appropriate quantization scheme for the continuous state variables, followed by approximate dynamic programming. We propose one such scheme based on barycentric approximations that effectively converts the continuous dynamics into a Markov decision process, and demonstrate that it can be viewed both as an approximation to the continuous dynamics and as a value function approximator over the continuous domain. We describe the application of this method to several hard industrial problems, and point out additional candidate problems that could be amenable to it.
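To make the idea concrete, the sketch below illustrates one common way a barycentric quantization scheme of this kind can be realized on a regular grid, in the spirit of Kuhn-triangulation barycentric coordinates; it is an assumption-laden illustration, not the paper's implementation, and names such as `barycentric_weights` and the toy usage are hypothetical.

```python
# Minimal sketch (assumed, not the authors' code) of barycentric
# quantization on a regular grid: a continuous state is expressed as a
# convex combination of d+1 grid vertices (Kuhn triangulation), which can
# serve both as a value-function interpolator and as stochastic
# transition probabilities onto grid nodes.
import numpy as np

def barycentric_weights(x, lo, hi, n):
    """Return (vertices, weights) for continuous state x on a grid with
    n points per axis over the box [lo, hi]: d+1 integer grid coordinates
    and nonnegative weights that sum to 1 and reproduce x exactly."""
    x, lo, hi = map(np.asarray, (x, lo, hi))
    d = x.size
    # Continuous coordinates in "grid units", clipped inside the grid.
    u = np.clip((x - lo) / (hi - lo) * (n - 1), 0, n - 1 - 1e-12)
    base = np.floor(u).astype(int)          # lower corner of the enclosing cell
    f = u - base                            # fractional parts in [0, 1)
    order = np.argsort(-f)                  # dimensions by descending fraction
    vertices = [base.copy()]
    weights = [1.0 - f[order[0]]]
    v = base
    for i, dim in enumerate(order):
        v = v.copy()
        v[dim] += 1                         # walk along the Kuhn simplex
        vertices.append(v)
        nxt = f[order[i + 1]] if i + 1 < d else 0.0
        weights.append(f[dim] - nxt)
    return np.array(vertices), np.array(weights)

if __name__ == "__main__":
    # Toy 2-D example: grid over [0, 1]^2 with n points per axis.
    n, lo, hi = 11, np.zeros(2), np.ones(2)
    V = np.random.rand(n, n)                # value estimates at grid nodes

    x = np.array([0.33, 0.71])
    verts, w = barycentric_weights(x, lo, hi, n)

    # (1) Value-function view: V(x) is the barycentric interpolation.
    v_x = sum(wi * V[tuple(v)] for v, wi in zip(verts, w))

    # (2) MDP view: a deterministic successor x' = f(x, a) becomes a
    #     stochastic transition onto the grid nodes with probabilities w.
    print("weights sum to", w.sum(), "interpolated value", v_x)
```

The same weights thus play both roles named in the abstract: used on successor states they define the transition probabilities of the resulting Markov decision process, and used on stored node values they define a continuous value function approximator.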