In example-based human pose estimation, the configuration of an evolving object is sought from visual evidence, relying solely on a set of sample images. We assume here that, at each time instant of a training session, a number of feature measurements are extracted from the available images, while ground truth is provided in the form of the true object pose. In this scenario, a sensible approach is to learn maps from features to poses using the information provided by the training set. In particular, multi-valued mappings linking feature values to sets of training poses can be constructed. To this end we propose a Belief Modeling Regression (BMR) approach in which a probability measure on any individual feature space maps to a convex set of probabilities on the set of training poses, in the form of a belief function. Given a test image, its feature measurements translate into a collection of belief functions on the set of training poses which, when combined, yield an entire family of probability distributions there. From the latter, either a single central pose estimate or a set of extremal estimates can be computed, together with a measure of how reliable the estimate is. In contrast to competing models, BMR can take the sparsity of the training samples into account when modeling the level of uncertainty associated with these estimates. We illustrate BMR's performance in an application to human pose recovery, showing that it outperforms our implementations of both the Relevance Vector Machine and Gaussian Process Regression. Finally, we discuss the motivation for, and advantages of, the proposed approach with respect to its most direct competitors.
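
As a rough, self-contained illustration of the pipeline sketched above, the Python listing below builds one belief function (mass function) per feature over a toy set of training poses, fuses them, and extracts a central pose estimate. It is only a conceptual sketch under stated assumptions: Dempster's rule of combination and a pignistic-style expectation are used purely for concreteness, and all names and numbers (combine_dempster, pignistic_expectation, the toy masses) are illustrative rather than taken from the paper, whose actual combination and estimation rules may differ.

import numpy as np
from itertools import product

# Toy setup: a small set of training poses (scalars here, for readability;
# in the paper each training pose is a full body configuration).
training_poses = np.array([0.0, 1.0, 2.0, 3.0])
Theta = frozenset(range(len(training_poses)))  # frame of discernment: pose indices

def combine_dempster(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (A, mA), (B, mB) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            combined[C] = combined.get(C, 0.0) + mA * mB
        else:
            conflict += mA * mB  # mass assigned to incompatible subsets
    norm = 1.0 - conflict  # assumes the two belief functions are not totally conflicting
    return {A: v / norm for A, v in combined.items()}, conflict

def pignistic_expectation(m):
    """Spread each mass uniformly over its focal set, then take the expected pose."""
    p = np.zeros(len(training_poses))
    for A, mass in m.items():
        for i in A:
            p[i] += mass / len(A)
    return float(p @ training_poses), p

# One belief function per feature: mass on the subset of training poses the
# feature deems compatible with the test image, plus mass on Theta (ignorance),
# which can be made larger when the training samples near the measurement are sparse.
m_feature1 = {frozenset({1, 2}): 0.7, Theta: 0.3}
m_feature2 = {frozenset({2, 3}): 0.6, Theta: 0.4}

m_fused, conflict = combine_dempster(m_feature1, m_feature2)
estimate, pignistic = pignistic_expectation(m_fused)
print(f"central pose estimate: {estimate:.2f}, conflict: {conflict:.2f}")

In this toy run the mass each feature leaves on the whole frame Theta plays the role of the ignorance induced by sparse or ambiguous training data, while the residual conflict between the fused belief functions gives one crude indication of how much the individual feature cues disagree; the pignistic probability is only one of several ways to pick a single representative distribution from the family the fused belief function encodes.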