This study aims to use deep learning to recover the diffuse albedo of human images captured under a wide range of real-world lighting conditions. A key challenge is the wide variety of textures found in full-body human images: while some aspects, such as skin color, occupy a limited color range, clothing and accessories span a broad spectrum of colors and textures. As a result, creating a comprehensive dataset with accurate labels is infeasible. To address this, we propose a data augmentation method that applies color shifts to semantic regions within our training images while maintaining a realistic appearance. We first segment the ground-truth albedos into their respective components (e.g., pants, shirt, hair) using a pre-trained human parsing network, then adjust the hue and intensity channels of each region using values randomly drawn from a carefully defined distribution. Our results show significant improvements in albedo recovery, especially in clothing areas, and better performance on underrepresented skin tones.
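The per-region augmentation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the Gaussian shift distributions, and their parameters (`hue_sigma`, `val_sigma`) are assumptions, since the text only states that shifts are drawn from a carefully defined distribution; the segmentation labels are assumed to come from a pre-trained human parsing network.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def augment_albedo(albedo, seg, rng, hue_sigma=0.05, val_sigma=0.1):
    """Apply a random hue/intensity shift to each semantic region.

    albedo: float array (H, W, 3) in [0, 1], the ground-truth albedo
    seg:    int array (H, W) of region labels (e.g., pants, shirt, hair)
            produced by a human parsing network
    rng:    numpy Generator supplying the random shifts
    """
    hsv = rgb_to_hsv(albedo)
    out = hsv.copy()
    for label in np.unique(seg):
        mask = seg == label
        dh = rng.normal(0.0, hue_sigma)  # hue shift; hue wraps around [0, 1)
        dv = rng.normal(0.0, val_sigma)  # intensity (value-channel) shift
        out[mask, 0] = (hsv[mask, 0] + dh) % 1.0
        out[mask, 2] = np.clip(hsv[mask, 2] + dv, 0.0, 1.0)
    return hsv_to_rgb(out)

# Example: shift two regions of a random "albedo" independently.
rng = np.random.default_rng(0)
albedo = rng.random((8, 8, 3))
seg = np.zeros((8, 8), dtype=int)
seg[:, 4:] = 1  # two semantic regions
augmented = augment_albedo(albedo, seg, rng)
```

Because each region receives its own shift, the augmented image preserves intra-region texture while diversifying the color statistics the network sees during training.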