In this paper we propose a process that generates abstract service robot mission representations by observing human demonstrations; these representations are then used during execution for autonomous, probabilistic decision making. The observation process is based on the same perception components the robot uses during execution, recording dialog between humans, human motion, and object poses. This leads to a natural, practical learning process that avoids dedicated demonstration centers or kinesthetic teaching. By generating mission models for probabilistic decision making as partially observable Markov decision processes (POMDPs), the robot can cope with the uncertain and dynamic environments encountered in real-world settings during execution. Service robot missions in a cafeteria setting, covering the modalities of mobility, natural human-robot interaction, and object grasping, have been learned and executed with this system.