Military medical planners must consider the dispatching of aerial military medical evacuation (MEDEVAC) assets when preparing for and executing major combat operations. The launch authority seeks to dispatch MEDEVAC assets such that prioritized battlefield casualties are transported quickly and efficiently to nearby medical treatment facilities. We formulate a Markov decision process (MDP) model to examine the MEDEVAC dispatching problem. The large size of the problem instance motivating this research renders conventional exact dynamic programming algorithms inappropriate. As such, we employ approximate dynamic programming (ADP) techniques to obtain high-quality dispatch policies relative to current practices. We apply an approximate policy iteration algorithmic strategy that utilizes least squares temporal differencing for policy evaluation. We construct a representative planning scenario based on contingency operations in northern Syria both to demonstrate the applicability of our MDP model and to examine the efficacy of our proposed ADP solution methodology. A designed computational experiment is conducted to determine how selected problem features and algorithmic features affect the quality of solutions attained by our ADP policies. Results indicate that the ADP policy outperforms the myopic policy (i.e., the default policy in practice) by up to nearly 31 percent with respect to a lifesaving performance metric for a baseline scenario. Moreover, the ADP policy attains decreased MEDEVAC response times and utilization rates. These results benefit military medical planners interested in the development and implementation of cogent MEDEVAC tactics, techniques, and procedures for application in combat situations with a high operations tempo.
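For concreteness, the following is a minimal sketch of the least squares temporal differencing (LSTD) policy-evaluation step referenced above, stated under an assumed linear value function architecture; the basis functions $\phi$, discount factor $\gamma$, and sample trajectory notation are illustrative assumptions rather than the specific construction developed in the paper. Assuming $\bar{V}(s \mid \theta) = \theta^{\top}\phi(s)$ and a sample trajectory $\{(s_t, r_t, s_{t+1})\}_{t=1}^{T}$ generated under the fixed policy being evaluated, LSTD computes
\[
\theta = A^{-1} b, \qquad
A = \sum_{t=1}^{T} \phi(s_t)\bigl(\phi(s_t) - \gamma\,\phi(s_{t+1})\bigr)^{\top}, \qquad
b = \sum_{t=1}^{T} \phi(s_t)\, r_t ,
\]
and approximate policy iteration alternates this evaluation step with a greedy improvement step of the form $\pi(s) \in \arg\max_{a} \bigl\{ r(s,a) + \gamma\, \mathbb{E}\bigl[\bar{V}(s' \mid \theta)\bigr] \bigr\}$.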