In computer vision, accurately estimating a 3D human skeleton from a single RGB image remains a challenging task. Inspired by the advantages of multi-view approaches, we propose a method that predicts enhanced 2D skeletons (specifically, 2D skeletons augmented with each joint's relative depth) for multiple virtual viewpoints from a single real-view image. By fusing these virtual-viewpoint skeletons, we can estimate the final 3D human skeleton more accurately. Our network consists of two stages. The first stage is a two-stream network: the Real-Net stream predicts the 2D image coordinates and relative depth of each joint in the real viewpoint, while the Virtual-Net stream estimates the relative depths of the same joints in the virtual viewpoints. The second stage consists of a depth-denoising module, a cropped-to-original coordinate transform (COCT) module, and a fusion module. The fusion module combines the skeleton information from the real and virtual viewpoints so that the fused representation can undergo feature embedding, 2D-to-3D lifting, and regression to an accurate 3D skeleton. Experimental results demonstrate that our single-view method achieves an average per-joint position error of 45.7 mm, which is superior to that of several prior studies of the same kind and comparable to that of sequence-based methods that take tens of consecutive frames as input.
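
To make the two-stage pipeline concrete, the following is a minimal PyTorch sketch of the data flow described above. The module names, placeholder backbone, joint count, number of virtual viewpoints, and layer sizes are illustrative assumptions, not the actual implementation.

```python
import torch
import torch.nn as nn

NUM_JOINTS = 17        # assumed joint count (e.g., Human3.6M convention)
NUM_VIRTUAL_VIEWS = 4  # assumed number of virtual viewpoints


class TwoStreamStage1(nn.Module):
    """Stage 1: Real-Net predicts 2D coordinates + relative depth per joint;
    Virtual-Net predicts relative depths of the same joints in virtual views."""

    def __init__(self, feat_dim=2048):
        super().__init__()
        # Placeholder backbone producing an image feature vector.
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.real_net = nn.Linear(feat_dim, NUM_JOINTS * 3)                     # (u, v, relative depth)
        self.virtual_net = nn.Linear(feat_dim, NUM_VIRTUAL_VIEWS * NUM_JOINTS)  # relative depths only

    def forward(self, image):
        feat = self.backbone(image)
        real = self.real_net(feat).view(-1, NUM_JOINTS, 3)
        virtual_depths = self.virtual_net(feat).view(-1, NUM_VIRTUAL_VIEWS, NUM_JOINTS)
        return real, virtual_depths


class FusionStage2(nn.Module):
    """Stage 2: denoise depths, map cropped 2D coordinates back to the original
    image frame (COCT), then fuse, embed, and lift the result to a 3D skeleton."""

    def __init__(self, embed_dim=256):
        super().__init__()
        depth_dim = (1 + NUM_VIRTUAL_VIEWS) * NUM_JOINTS
        self.depth_denoiser = nn.Sequential(nn.Linear(depth_dim, depth_dim), nn.ReLU(),
                                            nn.Linear(depth_dim, depth_dim))
        in_dim = NUM_JOINTS * 2 + depth_dim  # fused 2D coordinates + all relative depths
        self.embed = nn.Sequential(nn.Linear(in_dim, embed_dim), nn.ReLU())
        self.lift = nn.Linear(embed_dim, NUM_JOINTS * 3)  # regress 3D joint positions

    @staticmethod
    def coct(coords_crop, crop_offset, crop_scale):
        # Cropped-to-original coordinate transform: undo the crop/resize per sample.
        return coords_crop * crop_scale.unsqueeze(1) + crop_offset.unsqueeze(1)

    def forward(self, real, virtual_depths, crop_offset, crop_scale):
        coords = self.coct(real[..., :2], crop_offset, crop_scale)
        depths = torch.cat([real[..., 2:].flatten(1), virtual_depths.flatten(1)], dim=1)
        depths = self.depth_denoiser(depths)
        fused = torch.cat([coords.flatten(1), depths], dim=1)
        return self.lift(self.embed(fused)).view(-1, NUM_JOINTS, 3)


if __name__ == "__main__":
    stage1, stage2 = TwoStreamStage1(), FusionStage2()
    img = torch.randn(2, 3, 256, 256)
    real, vdepths = stage1(img)
    offset, scale = torch.zeros(2, 2), torch.ones(2, 2)
    skeleton_3d = stage2(real, vdepths, offset, scale)
    print(skeleton_3d.shape)  # torch.Size([2, 17, 3])
```

In this sketch, fusion is modeled as a simple concatenation of the real-view 2D coordinates with the real- and virtual-view relative depths before feature embedding and 2D-to-3D lifting; the actual fusion module may differ.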