Recently, deep-learning-based PatchMatch methods have developed rapidly in 3D reconstruction; in these methods, boundary regions are filled with the parts that most closely match the edge regions, but the limited amount of PatchMatch data hinders generalization to unseen settings. Generating diverse, large-scale PatchMatch datasets would, in turn, require considerable time and resources, because the neighborhood point-matching computations rely on random iterative algorithms. To address this issue, we first propose a new sparse, semi-supervised stereo-matching framework called SGT-PatchMatchNet, which can reconstruct reliable 3D structures from a small number of 3D points with ground-truth surface values. Secondly, to address the photometric inconsistency of some pixels in other views, we propose a photometric similar-point loss function that improves 3D-reconstruction performance by constraining neighboring pixels, when projected with the predicted depth, to map to consistent 3D coordinates. Finally, to address the edge blurring in the depth maps produced by the network model, we propose a robust-point consistency loss function that improves the completeness and robustness of occluded and edge regions. Experimental results show that the proposed method not only achieves good visual quality and quantitative performance but also effectively reduces the amount of computation and the computation time.
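To make the idea behind the photometric similar-point constraint concrete, the following is a minimal PyTorch sketch, not the paper's actual loss: it assumes a pinhole camera with known intrinsics K, a simple right/down pixel neighborhood, and a color-based similarity weight; the helper names `unproject` and `similar_point_loss` and all tensor shapes are illustrative assumptions.

```python
# Minimal sketch of a "photometric similar-point" style loss (assumptions noted above).
import torch

def unproject(depth, K_inv):
    """Back-project per-pixel depth (B,1,H,W) into camera-space 3D points (B,3,H,W)."""
    B, _, H, W = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, device=depth.device),
        torch.arange(W, device=depth.device),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float()  # (3,H,W) homogeneous pixels
    rays = torch.einsum("ij,jhw->ihw", K_inv, pix).unsqueeze(0)      # (1,3,H,W) viewing rays
    return rays * depth                                              # (B,3,H,W)

def similar_point_loss(depth, image, K_inv, sigma=0.1):
    """Penalize 3D-coordinate differences between photometrically similar neighboring pixels."""
    points = unproject(depth, K_inv)
    loss = 0.0
    for dy, dx in [(0, 1), (1, 0)]:  # right and down neighbors
        p_shift = torch.roll(points, shifts=(-dy, -dx), dims=(2, 3))
        i_shift = torch.roll(image, shifts=(-dy, -dx), dims=(2, 3))
        # Photometric similarity weight: pixels with similar color should share 3D structure.
        w = torch.exp(-torch.abs(image - i_shift).mean(dim=1, keepdim=True) / sigma)
        loss = loss + (w * (points - p_shift).norm(dim=1, keepdim=True)).mean()
    return loss

# Usage (hypothetical): depth = net(images); loss = similar_point_loss(depth, ref_image, torch.inverse(K))
```

The sketch only illustrates the general principle stated in the abstract, namely that depths predicted for photometrically similar neighboring pixels should back-project to consistent 3D coordinates; the paper's actual formulation may differ in its neighborhood definition, weighting, and multi-view projection.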