Large medical image datasets form a rich source of anatomical descriptions for research into pathology and clinical biomarkers. Many features may be extracted from data such as MR images and, through manifold learning methods, used to provide new representations of a population's anatomy. However, the ability of any individual feature to fully capture all aspects of morphology is limited. We propose a framework for deriving a representation from multiple features or measures, which can be chosen to suit the application and are processed in separate manifold-learning steps. The results are then combined to give a single set of embedding coordinates for the data. We illustrate the framework in a population study of neonatal brain MR images and show that measures of shape and of appearance yield consistent representations that correlate well with clinical data. These particular measures were chosen because the developing neonatal brain undergoes rapid changes in both shape and MR appearance; they were derived from extracted cortical surfaces, non-rigid deformations and image similarities. The combined single embeddings show improved correlations, demonstrating their benefit for further studies such as identifying patterns in the trajectories of brain development. The results also suggest a lasting effect of age at birth on brain morphology, consistent with previous clinical studies.
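A minimal sketch of such a pipeline is given below, assuming each measure (for example, deformation-based shape distances or pairwise image similarities) has already been summarised as a subject-by-subject affinity matrix. Each measure is embedded separately with scikit-learn's SpectralEmbedding (Laplacian eigenmaps), and the per-measure coordinates are fused by a second spectral step over a Gaussian affinity of the concatenated, standardised coordinates. The function names and this particular fusion rule are illustrative choices, not necessarily the combination scheme used in the study.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding


def embed_measure(affinity, n_components=2):
    """Embed one measure's pairwise affinity matrix with Laplacian eigenmaps."""
    se = SpectralEmbedding(n_components=n_components, affinity="precomputed")
    coords = se.fit_transform(affinity)
    # Standardise each coordinate so measures contribute on a comparable scale.
    return (coords - coords.mean(axis=0)) / coords.std(axis=0)


def combined_embedding(affinities, n_components=2):
    """Fuse per-measure embeddings into a single set of coordinates.

    The per-measure coordinates are concatenated, turned into a Gaussian
    affinity (bandwidth set to the median squared distance), and embedded
    once more -- one plausible fusion step, used here for illustration.
    """
    stacked = np.hstack([embed_measure(a, n_components) for a in affinities])
    d2 = ((stacked[:, None, :] - stacked[None, :, :]) ** 2).sum(axis=-1)
    sigma2 = np.median(d2[d2 > 0])
    joint_affinity = np.exp(-d2 / sigma2)
    se = SpectralEmbedding(n_components=n_components, affinity="precomputed")
    return se.fit_transform(joint_affinity)


if __name__ == "__main__":
    # Synthetic affinities standing in for, e.g., shape and appearance measures.
    rng = np.random.default_rng(0)
    n_subjects = 50

    def random_affinity():
        x = rng.normal(size=(n_subjects, 3))
        d2 = ((x[:, None] - x[None, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / d2.mean())

    coords = combined_embedding([random_affinity(), random_affinity()])
    print(coords.shape)  # (50, 2): one combined embedding per subject
```

The combined coordinates can then be correlated with clinical variables (such as age at birth or age at scan) in the same way as any single-measure embedding.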