Body composition can be assessed in many different ways. High-end medical equipment, such as Dual-energy X-ray Absorptiometry (DXA), Computed Tomography (CT), and Magnetic Resonance Imaging (MRI), offers high-fidelity pixel/voxel-level assessment but is prohibitively expensive; in the case of DXA and CT, it also exposes users to ionizing radiation. Whole-body air displacement plethysmography (BOD POD) can accurately estimate body density, but its assessment is limited to whole-body fat percentage. Optical 3D scanning and reconstruction techniques, such as those using depth cameras, have created new opportunities for improving body composition assessment by intelligently analyzing body shape features. We present a novel supervised inference model that predicts pixel-level body composition and percentage of body fat from 3D geometry features and body density. First, we use body density to build a baseline prediction of the fat distribution. Then, we use a Bayesian network to infer the probability of bias in the baseline prediction from 3D geometry features. Finally, we correct the bias using non-parametric regression. We use DXA assessment as the ground truth for model training and validation. On pixel-level body composition assessment, our method outperforms current state-of-the-art prediction models by 52.69% on average. On whole-body fat percentage assessment, our method outperforms the medical-grade BOD POD by 23.28%.
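For concreteness, the three-stage pipeline described above might look roughly like the following minimal sketch. It is not the paper's implementation: the Siri equation is used as a standard density-to-fat conversion for the baseline stage, a Gaussian naive Bayes classifier stands in for the paper's Bayesian network, k-nearest-neighbor regression stands in for its non-parametric regressor, and all features, thresholds, and data are synthetic placeholders.

```python
# Hypothetical sketch of the abstract's three-stage pipeline, on synthetic data.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

def baseline_fat_percent(body_density):
    """Stage 1: whole-body fat % from body density (Siri equation, 1961)."""
    return 495.0 / body_density - 450.0

# Synthetic training set: per-pixel 3D geometry features (stand-ins for
# quantities like local curvature, girth, or depth) and a simulated
# DXA ground-truth fat fraction.
n_pixels = 5000
geometry = rng.normal(size=(n_pixels, 3))           # placeholder geometry features
density = rng.uniform(0.99, 1.09, size=n_pixels)    # body density (g/cm^3)
base = baseline_fat_percent(density)                # stage-1 baseline per pixel
truth = base + geometry @ np.array([2.0, -1.5, 0.8]) + rng.normal(0, 1, n_pixels)
bias = base - truth                                 # baseline error to learn

# Stage 2: infer the probability that the baseline is biased at each pixel
# from geometry features (naive Bayes as a stand-in for a Bayesian network).
biased = (np.abs(bias) > 1.0).astype(int)           # assumed bias threshold
bias_clf = GaussianNB().fit(geometry, biased)

# Stage 3: non-parametric regression of the bias magnitude, applied only
# where the inferred bias probability is high.
bias_reg = KNeighborsRegressor(n_neighbors=15).fit(geometry, bias)

def predict_fat(geometry_feats, body_density, threshold=0.5):
    base = baseline_fat_percent(body_density)
    p_biased = bias_clf.predict_proba(geometry_feats)[:, 1]
    correction = np.where(p_biased > threshold,
                          bias_reg.predict(geometry_feats), 0.0)
    return base - correction

est = predict_fat(geometry, density)
print("mean abs error vs. synthetic DXA truth:", np.mean(np.abs(est - truth)))
```

Evaluated on the synthetic set, the corrected prediction should track the simulated DXA truth more closely than the density-only baseline, which is the qualitative behavior the abstract claims for the real model.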