This paper proposes a numerical technique, called turnpike improvement, for approximating the solution of a class of piecewise deterministic control problems typically associated with manufacturing flow control models. The algorithm exploits the structure of the Markov decision processes with continuous state and action spaces that can be associated with piecewise deterministic control systems. The numerical method is applicable whenever a turnpike property holds for an associated infinite horizon deterministic control problem. To illustrate the approach, we use a simple model that has been fully studied analytically in the literature. We compare the turnpike improvement technique with a direct approximation of the solution of the continuous-time Hamilton-Jacobi dynamic programming equations, inspired by Kushner's work. The two approaches agree remarkably well on this simple problem. We conclude with a discussion of the relative advantages of the two approaches.

KEY WORDS: Stochastic control; Infinite horizon optimal control; Turnpike properties; Policy improvement algorithm; Piecewise deterministic control problems