Whole-body oncological screening using CT images requires accurate anatomical localisation of organs and the skeleton. While a number of algorithms for multi-organ localisation have been presented, dense anatomical annotation of the whole skeleton has not been addressed until now; only methods for specialised applications, e.g., in spine imaging, have been described. In this work, we propose an approach for localising and annotating different parts of the human skeleton in CT images. We introduce novel anatomical trilateration features and employ them within iterative scale-adaptive random forests in a hierarchical fashion to annotate the whole skeleton. The anatomical trilateration features provide high-level, long-range context information that complements the classical local context-based features used in most image segmentation approaches. They rely on anatomical landmarks derived from the previous element of the cascade to express positions relative to reference points. Following a hierarchical approach, large anatomical structures are segmented first, before substructures are identified. We develop this method for bone annotation but also illustrate its performance, although not specifically optimised for it, on multi-organ annotation. Our method achieves average Dice scores of 77.4 to 85.6 for bone annotation on three different data sets. It can also segment different organs with sufficient performance for oncological applications, e.g., PET/CT analysis, and its computation time allows for use in clinical practice.
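To make the idea of anatomical trilateration features concrete, the following is a minimal sketch, not the authors' implementation: it assumes landmarks are given as 3D points (e.g., predicted by the previous cascade stage) and encodes each voxel by its Euclidean distances to those reference points; all names and shapes are illustrative assumptions.

```python
import numpy as np

def trilateration_features(voxel_coords, landmarks):
    """Hypothetical long-range context features: distances from each voxel
    to a set of anatomical landmarks (reference points).

    voxel_coords : (N, 3) array of voxel positions in mm
    landmarks    : (K, 3) array of landmark positions in mm
    returns      : (N, K) array of Euclidean distances
    """
    # Pairwise differences between every voxel and every landmark: (N, K, 3)
    diff = voxel_coords[:, None, :] - landmarks[None, :, :]
    # Collapse the spatial axis to get one distance per voxel-landmark pair
    return np.linalg.norm(diff, axis=-1)

# Toy usage: 2 voxels, 3 hypothetical landmarks
voxels = np.array([[10.0, 20.0, 30.0], [15.0, 25.0, 35.0]])
lms = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0], [0.0, 50.0, 0.0]])
print(trilateration_features(voxels, lms).shape)  # (2, 3)
```

Such distance-based features could then be appended to the classical local context features fed to each random forest stage, which is one plausible way the long-range and local information described above can be combined.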