Abstract. We propose a method for the segmentation of medical images that combines a novel parameterization of prior shape knowledge with a search scheme driven by the classification of local appearance. The method uses diffusion wavelets to capture arbitrary and continuous interdependencies in the training data and to build an efficient shape model from them. The lack of classic visual consistency in complex medical imaging data is tackled by a manifold learning approach that handles optimal high-dimensional local features via Gentle Boosting. Appearance saliency is encoded in the model, and segmentation is performed by extracting and classifying the corresponding features in a new data set under a diffusion-wavelet-based shape model constraint. Our framework supports hierarchies both in the model and in the search space, can encode complex geometric and photometric dependencies of the structure of interest, and can deal with arbitrary topologies. Promising results are reported for heart CT data sets, demonstrating the impact of the soft parameterization and the efficiency of our approach.
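As background for the appearance component named in the abstract, the sketch below shows the standard GentleBoost procedure (additive regression stumps fitted by weighted least squares), which is the generic algorithm behind Gentle Boosting; it is not the authors' implementation, and the function names, the feature matrix X, and labels y in {-1, +1} are illustrative assumptions.

```python
import numpy as np

def fit_stump(X, y, w):
    """Weighted least-squares regression stump: pick one feature and one
    threshold, with constant outputs a (feature > t) and b (feature <= t)."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            mask = X[:, j] > t
            wa, wb = w[mask].sum(), w[~mask].sum()
            a = (w[mask] * y[mask]).sum() / wa if wa > 0 else 0.0
            b = (w[~mask] * y[~mask]).sum() / wb if wb > 0 else 0.0
            err = (w * (y - np.where(mask, a, b)) ** 2).sum()
            if err < best_err:
                best_err, best = err, (j, t, a, b)
    return best

def gentle_boost(X, y, n_rounds=50):
    """Fit an additive model F(x) = sum_m f_m(x) of regression stumps;
    after each round, reweight samples by exp(-y * f_m(x))."""
    w = np.full(len(y), 1.0 / len(y))
    stumps = []
    for _ in range(n_rounds):
        j, t, a, b = fit_stump(X, y, w)
        f = np.where(X[:, j] > t, a, b)
        stumps.append((j, t, a, b))
        w *= np.exp(-y * f)   # downweight samples the new stump classifies well
        w /= w.sum()
    return stumps

def predict(stumps, X):
    """Classify by the sign of the accumulated stump responses."""
    F = np.zeros(len(X))
    for j, t, a, b in stumps:
        F += np.where(X[:, j] > t, a, b)
    return np.sign(F)
```

In this setting each column of X would correspond to one local appearance feature, so the stumps selected over the boosting rounds indicate which features carry the discriminative appearance information.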