We revisit closed-loop performance guarantees for Model Predictive Control in both the deterministic and the stochastic setting, and extend them to novel performance results for receding horizon control of Partially Observable Markov Decision Processes. While performance guarantees similar to those achievable in deterministic Model Predictive Control can be obtained even in the stochastic case, the underlying stochastic optimal control law is in general intractable to compute. This intractability is alleviated for a particular class of stochastic systems, namely Partially Observable Markov Decision Processes, provided the problem dimensions remain moderate. This motivates extending the available performance guarantees to this class of systems, which can also be used to approximate general nonlinear dynamics via gridding of the state, observation, and control spaces. We demonstrate the applicability of the novel closed-loop performance results on an example in healthcare decision making, which relies explicitly on the dual nature of the control decisions in stochastic optimal control when weighing appropriate appointment times, diagnostic tests, and medical interventions for the treatment of a disease modeled by a Markov chain.