Learning-based LiDAR odometry has recently achieved robust estimation results in mobile robot localization, but most existing methods are built on supervised learning. During network training, these supervised methods rely heavily on ground-truth pose labels, which limits their practical applicability. In contrast, this article proposes SSLO, a novel self-supervised LiDAR odometry method that trains a three-view pose network on unlabeled point cloud data alone to accomplish the robot localization task. Specifically, first, because the raw LiDAR point cloud is sparse and unordered, it is difficult to extract features from it with deep convolutional neural networks; spherical projection is therefore used to convert the raw point cloud into a regular vertex map, which then serves as the input of the network. Second, during training, SSLO applies multiple geometric losses tailored to different point cloud matching situations and introduces uncertainty weights into the loss computation to reduce the interference of noise and moving objects in the scene. Finally, the proposed method is evaluated not only in simulation experiments on the KITTI and Apollo-SouthBay datasets but also on a real-world wheeled robot SLAM task. Extensive experimental results show that the proposed method performs well across different environments.
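To make the projection step concrete, the sketch below shows one common way to build a vertex map from an unordered point cloud via spherical coordinates. The resolution, field-of-view bounds (chosen to match a KITTI-style Velodyne HDL-64E), and the function name are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def spherical_projection(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project an unordered (N, 3) LiDAR point cloud onto a regular
    H x W vertex map via spherical coordinates; each pixel stores the
    (x, y, z) of the point that falls into it. The vertical field of
    view here is an assumption matching a Velodyne HDL-64E (KITTI);
    other sensors need different fov_up / fov_down values."""
    fov_up_rad = np.radians(fov_up)
    fov_down_rad = np.radians(fov_down)
    fov = fov_up_rad - fov_down_rad

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)

    yaw = np.arctan2(y, x)                           # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(depth, 1e-8))   # elevation angle

    # Normalize both angles to [0, 1], then scale to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * w
    v = (1.0 - (pitch - fov_down_rad) / fov) * h

    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    # Write farthest points first so that, on pixel collisions, the
    # closest point wins (a common convention for range/vertex images).
    order = np.argsort(depth)[::-1]
    vertex_map = np.zeros((h, w, 3), dtype=np.float32)
    vertex_map[v[order], u[order]] = points[order]
    return vertex_map
```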
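The uncertainty weighting can likewise be sketched minimally, assuming the common heteroscedastic formulation of Kendall and Gal (2017) in which a predicted per-point variance down-weights unreliable residuals; the abstract does not specify SSLO's exact scheme, so `uncertainty_weighted_loss` and its arguments are hypothetical.

```python
import torch

def uncertainty_weighted_loss(residuals, log_var):
    """Down-weight per-point geometric residuals by a predicted
    uncertainty (hypothetical stand-in for the paper's weights).

    residuals: (N,) per-point matching errors (e.g., point-to-plane)
    log_var:   (N,) predicted log-variance for each point
    """
    precision = torch.exp(-log_var)
    # Points with high predicted variance (e.g., on moving objects or
    # noisy returns) contribute less; the additive log_var term keeps
    # the network from inflating every variance to zero out the loss.
    return (precision * residuals + log_var).mean()
```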