Abstract: 3D point cloud registration is a crucial topic in reverse engineering, computer vision, and robotics. The core of this problem is to estimate a transformation matrix that aligns a source point cloud with a target point cloud. Several learning‐based methods have achieved high performance. However, they struggle with both partially overlapping and multi‐scale point clouds, since they use singular value decomposition (SVD) to find the rotation matrix without fully considering scale information. Furthermore, previous networks cannot effectively handle point clouds with large initial rotation angles, a common case in practice. To address these problems, this paper presents a learning‐based point cloud registration network, HDRNet, which consists of four stages: local feature extraction, correspondence matrix estimation, feature embedding and fusion, and parametric regression. HDRNet is robust to noise and large rotation angles, and can effectively handle registration of partially overlapping and multi‐scale point clouds. The proposed model is trained on the ModelNet40 dataset and compared with ICP, SICP, FGR, and recent learning‐based methods (PCRNet, IDAM, RGMNet, and GMCNet) under several settings, including generalization to unseen objects, achieving higher success rates. To verify the effectiveness and generality of our model, we further evaluated it on the Stanford 3D Scanning Repository.
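The scale limitation mentioned above can be made concrete. Classical SVD-based (Kabsch/Procrustes) alignment recovers only a rotation and translation; the Umeyama variant additionally recovers a uniform scale from the singular values. The sketch below is an illustrative NumPy implementation of that classical closed-form estimator given known correspondences, not HDRNet's learned pipeline; all names are ours.

```python
import numpy as np

def umeyama_alignment(src, tgt):
    """Closed-form similarity-transform estimation (Umeyama, 1991).

    Given corresponding points src, tgt of shape (N, 3), returns scale s,
    rotation R, and translation t such that tgt ≈ s * R @ src + t.
    """
    mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
    src_c, tgt_c = src - mu_s, tgt - mu_t
    # Cross-covariance of the centered correspondences
    H = tgt_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(H)
    # Reflection correction keeps R a proper rotation (det(R) = +1)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0
    R = U @ S @ Vt
    # Scale comes from the singular values and the source variance;
    # plain Kabsch omits this step and thus fixes s = 1.
    var_s = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_s
    t = mu_t - s * R @ mu_s
    return s, R, t
```

With exact correspondences this recovers a known similarity transform to machine precision; under partial overlap or unknown correspondences (the settings HDRNet targets), such a closed-form step alone is insufficient.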