Video-based point cloud compression projects dynamic point clouds onto 2D videos, called geometry videos and texture videos, for compression and transmission. In a geometry video, the pixel values encode the positions of the points in the point cloud, so distortion of the geometry video introduces holes and noise points into the reconstructed point cloud. Existing methods for enhancing 2D images or videos can be applied to improve the quality of geometry videos; however, limitations remain in the reconstructed 3D point cloud because these methods do not account for features of the 3D point cloud such as shape changes and surface orientations. To overcome these limitations, we propose an enhancement network specifically designed to enhance point clouds by restoring geometry videos, which calibrates the shape variations and surface orientations of the point cloud. Specifically, the gradient of the geometry video reflects changes in pixel values, which correspond to changes in the shape of the point cloud, while the normal vectors of the points provide the surface orientation of the point cloud. To attain similarity of the shape changes, we design a two-branch network structure and use an additional branch, comprising a gradient information restoring module, to restore the gradient information of the geometry video. To ensure the consistency of point normal vectors, a surface fitting and estimation module is designed to compute the normal vectors of the distorted and original point clouds from the geometry video, and the difference between these normal vectors then guides the restoration of the geometry video. Validation experiments on currently available point cloud sequences demonstrate the effectiveness of our method.
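
The following is a minimal sketch, not the authors' implementation, of how the two consistency terms described above could be expressed: a gradient term on the geometry (depth) frames and a normal term obtained by locally fitting the surface from those frames. All function names, tensor shapes, and loss weights here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def image_gradients(depth):
    """Finite-difference gradients of a geometry (depth) frame.

    depth: (B, 1, H, W) tensor of per-pixel depth values.
    Returns dz/dx and dz/dy, each (B, 1, H, W), zero-padded at the borders.
    """
    dx = F.pad(depth[:, :, :, 1:] - depth[:, :, :, :-1], (0, 1, 0, 0))
    dy = F.pad(depth[:, :, 1:, :] - depth[:, :, :-1, :], (0, 0, 0, 1))
    return dx, dy


def normals_from_depth(depth):
    """Per-pixel surface normals estimated from a depth frame.

    The surface is locally approximated by its tangent vectors along x and y,
    (1, 0, dz/dx) and (0, 1, dz/dy); their cross product (-dz/dx, -dz/dy, 1)
    gives the (unnormalized) surface normal.
    """
    dx, dy = image_gradients(depth)
    n = torch.cat([-dx, -dy, torch.ones_like(dx)], dim=1)  # (B, 3, H, W)
    return F.normalize(n, dim=1)


def geometry_enhancement_loss(restored, original, w_grad=1.0, w_normal=1.0):
    """Combine pixel, gradient, and normal consistency between the restored
    and original geometry frames (weights are placeholders)."""
    pixel_loss = F.l1_loss(restored, original)

    rx, ry = image_gradients(restored)
    ox, oy = image_gradients(original)
    grad_loss = F.l1_loss(rx, ox) + F.l1_loss(ry, oy)

    # Penalize the angular difference between the two normal fields.
    cos_sim = (normals_from_depth(restored) * normals_from_depth(original)).sum(dim=1)
    normal_loss = (1.0 - cos_sim).mean()

    return pixel_loss + w_grad * grad_loss + w_normal * normal_loss
```

In this sketch the gradient term stands in for the gradient information restoring branch, and the depth-derived normals stand in for the surface fitting and estimation module; the paper's actual modules operate inside the network rather than only in the loss.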