Accurate 3D human pose estimation from single images is possible with sophisticated deep-net architectures that have been trained on very large datasets. However, this still leaves open the problem of capturing motions for which no such database exists. Manual annotation is tedious, slow, and error-prone. In this paper, we propose to replace most of the annotations by the use of multiple views, at training time only. Specifically, we train the system to predict the same pose in all views. Such a consistency constraint is necessary but not sufficient to predict accurate poses. We therefore complement it with a supervised loss that aims to predict the correct pose in a small set of labeled images, and with a regularization term that penalizes drift from initial predictions. Furthermore, we propose a method to estimate camera pose jointly with human pose, which lets us exploit multi-view footage where calibration is difficult, e.g., for pan-tilt or moving handheld cameras. We demonstrate the effectiveness of our approach on established benchmarks, as well as on a new Ski dataset with rotating cameras and expert ski motion, for which annotations are truly hard to obtain.

A natural way to address the lack of training data for such motions would be to annotate video data. However, achieving high accuracy would require a great deal of annotation, which is tedious, slow, and error-prone. As illustrated by Fig. 1, we therefore propose to replace most of the annotations by the use of multiple views, at training time only. Specifically, we use them to provide weak supervision and force the system to predict the same pose in all views.

While such view-consistency constraints increase accuracy, they are unfortunately not sufficient. For example, the network can learn to always predict the same pose, independently of the input image. To prevent this, we use a small set of images with ground-truth poses, which serve a dual purpose. First, they provide strong supervision during training. Second, they let us regularize the multi-view predictions by encouraging them to remain close to the predictions of a network trained with the scarce supervised data only.

In addition, we propose to use a normalized pose distance to evaluate all losses involving poses. It disentangles pose from scale, and we found it to be key to maintaining accuracy when the annotated data is scarce.

Our experiments demonstrate the effectiveness of our weakly-supervised multi-view training strategy on several established benchmarks, as well as on a new Ski dataset with rotating cameras and expert ski motion.
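As a rough illustration of how the terms described above fit together, the sketch below writes the training objective with illustrative notation: $f_\theta$ denotes the pose network, $f_{\mathrm{ref}}$ a copy trained on the small labeled set $\mathcal{S}$ alone and then frozen, $\mathcal{U}$ the unlabeled multi-view frames, $R_v$ the (possibly estimated) rotation from camera $v$ to a common frame, and $d$ one simple choice of scale-normalized pose distance. These symbols and weights are assumptions for the sketch, not the exact formulation we develop later.

% Illustrative notation only: f_theta is the trainable pose predictor,
% f_ref a frozen copy trained on the small labeled set S,
% U the unlabeled multi-view frames, R_v the rotation from camera v
% to a common coordinate frame, and lambda_c, lambda_r weighting factors.
\begin{align}
  d(\mathbf{p},\mathbf{q}) &= \Big\lVert \frac{\mathbf{p}}{\lVert\mathbf{p}\rVert}
                               - \frac{\mathbf{q}}{\lVert\mathbf{q}\rVert} \Big\rVert_2 , \\
  \mathcal{L}(\theta) &= \sum_{(I,\mathbf{p}^{\mathrm{gt}})\in\mathcal{S}} d\big(f_\theta(I),\mathbf{p}^{\mathrm{gt}}\big)
    \;+\; \lambda_c \sum_{t\in\mathcal{U}} \sum_{v<v'} d\big(R_v f_\theta(I^t_v),\, R_{v'} f_\theta(I^t_{v'})\big)
    \;+\; \lambda_r \sum_{t\in\mathcal{U}} \sum_{v} d\big(f_\theta(I^t_v),\, f_{\mathrm{ref}}(I^t_v)\big).
\end{align}

The first term provides direct supervision on the labeled images, the second enforces that all views of the same time instant map to a single pose once expressed in a common frame, and the third keeps predictions on unlabeled images close to those of the initial network, which discourages the degenerate constant-pose solution; normalizing both poses inside $d$ removes the scale ambiguity mentioned above.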