In this paper, we propose a new-normal-guided depth completion network, named NNNet, that predicts dense depth from sparse LiDAR data and a single color image. Sparse depth completion methods often use normal maps as a constraint for model training. However, constructing a normal map directly from the color image introduces considerable noise and degrades model performance. We therefore generate a new normal map from the sparse LiDAR depth data and use it as an intermediate constraint that promotes the fusion of multi-modal features during network training. The new normal map is generated by converting the input depth into a grayscale image, constructing a normal map from it, replacing the Z channel of the normal map with the original depth, and finally adding a mask. Based on the new normal map, we construct NNNet, an end-to-end network for sparse depth completion guided by the corresponding color image. NNNet consists of two branches: one branch generates the new normal map from the sparse depth and its corresponding color image, while the other constructs a dense depth image from the sparse depth and the predicted new normal map. The two branches fully merge their features through skip connections. In the loss function, we apply an L2 loss to the predicted new normal map to ensure that it plays its constraining role. Finally, we refine the dense depth image with a spatial propagation network. Experimental results show that the new normal map provides effective constraints for sparse depth completion. Moreover, NNNet achieves an RMSE of 724.14 and outperforms most current state-of-the-art methods.
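
To make the four-step construction of the new normal map concrete, the following is a minimal numpy sketch of one plausible reading of the abstract's description (grayscale conversion, gradient-based normal estimation, Z-channel replacement, mask). The function name, the normalization choices, and the use of image gradients to approximate normals are our assumptions, not the paper's confirmed implementation:

```python
import numpy as np

def build_new_normal_map(sparse_depth, eps=1e-6):
    """Hypothetical sketch of the new-normal-map construction.

    sparse_depth: (H, W) array with 0 at pixels without a LiDAR return.
    Returns an (H, W, 4) array: (nx, ny, depth, mask).
    """
    # Step 1: convert the input depth into a grayscale image in [0, 1].
    d_max = sparse_depth.max()
    gray = sparse_depth / d_max if d_max > 0 else sparse_depth

    # Step 2: construct a normal map; here we approximate surface
    # normals from the grayscale image's spatial gradients (assumption).
    gy, gx = np.gradient(gray)
    normals = np.dstack([-gx, -gy, np.ones_like(gray)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True) + eps

    # Step 3: replace the Z channel of the normal map with the original
    # depth, so the map is no longer unit-length but carries raw depth.
    normals[..., 2] = sparse_depth

    # Step 4: add a mask marking pixels that have LiDAR measurements.
    mask = (sparse_depth > 0).astype(np.float32)
    return np.dstack([normals, mask])
```

Under these assumptions, an (H, W) sparse depth array yields a four-channel map that could serve as the training target for the normal branch.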
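
The abstract also states that an L2 loss keeps the new normal map acting as a constraint. A minimal sketch of such a term is shown below; restricting the penalty to masked (valid LiDAR) pixels is our assumption, as the abstract does not specify where the loss is evaluated:

```python
import numpy as np

def new_normal_loss(pred, target, mask, eps=1e-6):
    """L2 constraint between predicted and generated new normal maps.

    pred, target: (H, W, C) maps; mask: (H, W) validity mask (assumed).
    """
    # Mean squared error averaged over valid pixels only.
    sq_err = (pred - target) ** 2 * mask[..., None]
    return sq_err.sum() / (mask.sum() + eps)
```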