High-fidelity 3D human body reconstruction is challenging: single-view methods often produce distortions due to self-occlusion, while existing multi-view approaches either focus only on pose estimation or offer limited reconstruction quality. This study presents an efficient approach to realistic 3D human body reconstruction from front and back images, emphasizing symmetry and the preservation of surface detail. We first extract keypoints and pose information from the dual-view images and fit SMPL-X to generate an initial 3D body. Then, using normal maps derived from both views, we infer high-fidelity surfaces and optimize the SMPL-X model against these reconstructed surfaces. Through implicit modeling, we merge the front and back surfaces with a symmetric fusion boundary to obtain a complete 3D body model. Experimental results on the THuman2.0 dataset demonstrate the effectiveness of our method, with significant improvements in surface-detail fidelity. To further validate the model’s accuracy, we collected waist and chest circumference measurements from 120 individuals and found an average measurement error below 0.8 cm, confirming the robustness of SMPL-X optimized with dual-view data.