Multi-stage decision problems under uncertainty are abundant in the process industries. The Markov decision process (MDP) is a general mathematical formulation of such problems. Although stochastic programming and dynamic programming are the standard methods for solving MDPs, their unwieldy computational requirements limit their usefulness in real applications. Approximate dynamic programming (ADP) combines simulation and function approximation to alleviate the "curse of dimensionality" associated with the traditional dynamic programming approach. In this paper, the ADP method, which abates the curse of dimensionality by solving the DP within a carefully chosen, small subset of the state space, is introduced, and a survey of recent research directions within the field of ADP is presented.
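To make the idea concrete, the following is a minimal sketch of one common ADP scheme, approximate value iteration with a linear function approximator: rather than sweeping the entire state space as classical DP does, Bellman backups are performed only on a small sampled subset of states, with simulated transitions standing in for exact expectations. The toy MDP, feature map, and sampling scheme are illustrative assumptions, not taken from the paper.

```python
import random

random.seed(0)

STATES = list(range(100))   # full state space (kept small here for clarity)
ACTIONS = [-1, +1]          # move left or right
GAMMA = 0.9                 # discount factor

def step(s, a):
    """Hypothetical noisy transition; a unit reward is earned at state 0."""
    s2 = max(0, min(99, s + a + random.choice([-1, 0, 1])))
    r = 1.0 if s2 == 0 else 0.0
    return s2, r

def features(s):
    """Simple polynomial features for the linear value approximation."""
    x = s / 99.0
    return [1.0, x, x * x]

def v(theta, s):
    """Approximate value function: inner product of weights and features."""
    return sum(t * f for t, f in zip(theta, features(s)))

def sampled_q(theta, s, a, n=5):
    """Monte Carlo estimate of the one-step lookahead value of (s, a)."""
    total = 0.0
    for _ in range(n):
        s2, r = step(s, a)
        total += r + GAMMA * v(theta, s2)
    return total / n

theta = [0.0, 0.0, 0.0]
alpha = 0.05
for it in range(200):
    # ADP: back up values only on a small sampled subset of the state space
    for s in random.sample(STATES, 10):
        target = max(sampled_q(theta, s, a) for a in ACTIONS)
        # Stochastic gradient step toward the sampled Bellman target
        err = target - v(theta, s)
        theta = [t + alpha * err * f for t, f in zip(theta, features(s))]
```

The per-iteration cost depends on the sample size rather than on the size of the full state space, which is the sense in which ADP abates the curse of dimensionality; the quality of the result then hinges on how the subset of states and the function approximator are chosen.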