We address the problem of object modeling from 3D and 3D+T data composed of multiple images that capture different parts of an object of interest, are separated by large gaps, and are misaligned with respect to one another. Such images share only a limited number of intersections, which makes their registration particularly challenging. Furthermore, these data may originate from various medical imaging modalities and can therefore present very diverse spatial configurations. Previous methods perform registration and object modeling (segmentation and interpolation) sequentially; however, sequential registration is ill-suited to images with few intersections. We propose a new methodology that, regardless of the spatial configuration of the data, performs the three stages of registration, segmentation, and shape interpolation from widely spaced and misaligned images simultaneously. We integrate these three processes within a level set framework in order to benefit from their synergistic interactions. We also propose a new registration method that exploits segmentation information rather than pixel intensities and that accounts for the global shape of the object of interest, for increased robustness and accuracy. Registration accuracy is compared against traditional mutual-information-based methods, and the complete modeling framework is assessed against traditional sequential processing and validated on artificial, CT, and MRI data.