In this paper, we develop a model-free approximate dynamic programming method for stochastic systems modeled as Markov decision processes, with the objective of maximizing the probability of satisfying high-level system specifications expressed in a subclass of temporal logic: syntactically co-safe linear temporal logic. The proposed method consists of two steps. First, we decompose the planning problem into a sequence of sub-problems based on the topological structure of the task automaton translated from the temporal logic formula. Second, we extend a model-free approximate dynamic programming method to compute the value functions, one for each state of the task automaton, in the reverse order of their causal dependency. In particular, we show that the run time of the proposed algorithm does not grow exponentially with the size of the specification. The correctness and efficiency of the algorithm are demonstrated through a robotic motion planning example.
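As a rough illustration of this two-step structure, the sketch below pairs a toy labeled MDP with a toy task automaton for a simple reachability task and runs tabular Q-learning one automaton state at a time, in reverse topological order, bootstrapping from the values already computed for downstream automaton states. The toy dynamics, automaton, ordering, and hyperparameters are illustrative assumptions of this sketch, not the construction used in the paper.

```python
# Hypothetical sketch: the toy MDP, automaton, and hyperparameters are
# illustrative assumptions, not the paper's construction.
import random
from collections import defaultdict

# --- Toy labeled MDP (3 states, 2 actions); label 'b' marks the goal region ---
S, A = [0, 1, 2], [0, 1]
def step(s, a):
    # stochastic dynamics: with prob. 0.8 the action moves the state, else it stays
    if random.random() < 0.8:
        return min(s + 1, 2) if a == 1 else max(s - 1, 0)
    return s
def label(s):
    return 'b' if s == 2 else 'a'

# --- Toy task automaton for a co-safe reachability formula (eventually 'b') ---
Q = ['q0', 'acc']
ACCEPT = {'acc'}
def delta(q, sym):
    return 'acc' if (q == 'q0' and sym == 'b') else q

# Reverse topological order over automaton states (this toy automaton is already
# acyclic; in general one would order the condensation of the automaton graph).
reverse_topo = ['acc', 'q0']

# --- Model-free step: per-automaton-state Q-learning on the product MDP ---
V = {q: defaultdict(float) for q in Q}
for s in S:
    for q in ACCEPT:
        V[q][s] = 1.0            # accepting state: specification already satisfied

alpha, eps, episodes, horizon = 0.1, 0.2, 5000, 30
for q in reverse_topo:
    if q in ACCEPT:
        continue
    Qtab = defaultdict(float)    # Q-values for product states with automaton state q
    for _ in range(episodes):
        s = random.choice(S)
        for _ in range(horizon):
            a = (random.choice(A) if random.random() < eps
                 else max(A, key=lambda a_: Qtab[(s, a_)]))
            s2 = step(s, a)
            q2 = delta(q, label(s2))
            if q2 != q:
                # automaton moved to a downstream state: bootstrap from its
                # already-computed value (probability of eventual satisfaction)
                Qtab[(s, a)] += alpha * (V[q2][s2] - Qtab[(s, a)])
                break
            target = max(Qtab[(s2, a_)] for a_ in A)
            Qtab[(s, a)] += alpha * (target - Qtab[(s, a)])
            s = s2
    for s in S:
        V[q][s] = max(Qtab[(s, a_)] for a_ in A)

# Estimated probability of satisfying the reachability task from each MDP state
print({s: round(V['q0'][s], 2) for s in S})
```

Because each automaton state is handled only after all of its successors, the learning problem at every stage reduces to a reachability-style value estimation whose terminal rewards are the previously computed values, which is what keeps the overall effort from scaling exponentially in the specification size.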