Estimating 3D human poses from 2D poses is a challenging problem due to joint self-occlusion, weak generalization, and the inherent ambiguity of recovering depth. However, the spatial structural dependencies among human body keypoints can be exploited to alleviate joint self-occlusion. We therefore represent the human pose as a directed graph and propose a network implemented with graph convolution to predict 3D poses from given 2D poses. In the digraph, we determine the connection weight of each edge according to the error distribution of joint estimation, which makes our model robust to noise. By refining the coarse 3D estimation and applying adversarial learning, our algorithm improves estimation accuracy and relieves the ambiguity of the 2D-to-3D mapping. Experiments on the Human3.6M and MPI-INF-3DHP datasets demonstrate excellent quantitative performance. More importantly, with pre-training, our algorithm also generalizes well to the in-the-wild MPII dataset.

INDEX TERMS 3D human pose, graph convolutional networks, adversarial learning, geometric priors, vanishing gradient, in-the-wild scenes.
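As a rough illustration of the idea described above (not the paper's actual architecture), a single weighted graph-convolution step over a skeleton digraph might look like the following sketch. The joint indices, edge weights, and feature dimensions are placeholders; in the paper the edge weights are set from the error distribution of joint estimation.

```python
import numpy as np

# Toy 5-joint skeleton as a directed graph: parent -> child edges.
# (Hypothetical joint indices; the paper's skeleton differs.)
num_joints = 5
edge_weight = {(0, 1): 0.9, (1, 2): 0.7, (1, 3): 0.8, (3, 4): 0.6}

# Weighted adjacency matrix with self-loops; information flows parent -> child.
A = np.eye(num_joints)
for (parent, child), w in edge_weight.items():
    A[child, parent] = w

# One graph-convolution step: aggregate in-neighbor features,
# then apply a shared linear transform (random weights for illustration).
rng = np.random.default_rng(0)
X = rng.standard_normal((num_joints, 2))   # 2D joint coordinates as input features
W = rng.standard_normal((2, 3))            # lift per-joint features from 2 to 3 dims

# Row-normalize so each joint averages over its weighted in-neighbors.
D_inv = np.diag(1.0 / A.sum(axis=1))
H = np.maximum(D_inv @ A @ X @ W, 0.0)     # ReLU activation

print(H.shape)  # (5, 3): a 3-dimensional feature per joint
```

Stacking several such layers, with the final layer outputting 3 coordinates per joint, yields a coarse 3D estimate that can then be refined and regularized by an adversarial loss.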