The operation of a variety of natural and man-made systems subject to uncertainty is kept within a range of safe behavior through run-time sensing of the system state and control actions selected according to some strategy. When the system is observed from an external perspective, the control strategy may not be known and must instead be reconstructed by jointly observing the applied control actions and the corresponding evolution of the system state. This reconstruction is largely hindered by limitations in the sensing of the system state and by varying levels of noise. We address the problem of optimally selecting control actions for a stochastic system with unknown dynamics operating under a controller with unknown strategy, for which we can observe trajectories made of the sequence of control actions and noisy observations of the system state, labeled with the exact value of some reward function. To this end, we present an approach to train an Input–Output Hidden Markov Model (IO-HMM) as the generative stochastic model that describes the state dynamics of a POMDP, by applying a novel optimization objective adopted from the literature. The learning task faces two restrictions: the only available data are a limited number of trajectories of applied actions, noisy observations of the system state, and reward values; and high failure costs rule out interaction with the online environment, so exploratory testing is not possible. Traditionally, stochastic generative models have been used to learn the underlying system dynamics and to select appropriate actions for the task at hand. However, current state-of-the-art techniques, in which the state dynamics of the POMDP are first learned and strategies are then optimized over them, frequently fail because the model that best fits the data may not be well suited for control. By using the aforementioned optimization objective, we tackle the problems related to model mis-specification. The proposed methodology is illustrated in a failure-avoidance scenario for a multi-component system. The quality of the decision making is evaluated using the reward collected on the test data and compared against the approach commonly used in the prior literature.
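As a rough illustration of the setting (not the paper's actual implementation), the sketch below shows the shape of the offline data, trajectories of applied actions, noisy observations, and reward labels, and how a discrete IO-HMM with action-conditioned transition matrices filters a belief over the hidden system state from such a trajectory. All names, dimensions, and parameter values are hypothetical; in the approach described above the model parameters would be fitted offline from the logged trajectories rather than drawn at random.

```python
import numpy as np

# Hypothetical dimensions: S hidden states, A discrete actions, O discrete observations.
S, A, O = 4, 2, 5
rng = np.random.default_rng(0)

# IO-HMM parameters (random stand-ins here; in the setting above they would be
# learned offline, e.g. with the reward-aware objective discussed in the text).
T = rng.dirichlet(np.ones(S), size=(A, S))   # T[a, s, s'] = P(s' | s, a)
E = rng.dirichlet(np.ones(O), size=S)        # E[s, o]     = P(o | s)
pi = np.full(S, 1.0 / S)                     # initial hidden-state distribution

def filter_trajectory(actions, observations):
    """Forward filtering: belief over the hidden state after each observation.

    Convention assumed here: action a_t drives the transition s_t -> s_{t+1},
    and observation o_{t+1} is emitted from s_{t+1}.
    """
    belief = pi * E[:, observations[0]]
    belief /= belief.sum()
    beliefs = [belief]
    for a, o in zip(actions, observations[1:]):
        # Predict with the action-conditioned transition, then correct with the emission.
        belief = (belief @ T[a]) * E[:, o]
        belief /= belief.sum()
        beliefs.append(belief)
    return np.array(beliefs)

# One synthetic offline trajectory: actions, noisy observations, and exact rewards.
actions = rng.integers(0, A, size=9)
observations = rng.integers(0, O, size=10)
rewards = rng.normal(size=10)   # reward labels enter only the learning objective, not filtering

print(filter_trajectory(actions, observations).round(3))
```

In this reading, the filtered belief is what a downstream decision rule would consume when selecting control actions, while the reward labels constrain the offline model-fitting step so that the learned dynamics remain useful for control rather than merely fitting the data.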