LiDAR odometry is a fundamental task for high-precision map construction and real-time, accurate localization in autonomous driving. However, point clouds of urban road scenes acquired by vehicle-borne lasers are large in volume, exhibit a "near-dense, far-sparse" density distribution, and contain various dynamic objects, which reduce the efficiency and accuracy of existing LiDAR odometry methods. To address these issues, a simulation-based self-supervised line-extraction method for urban road scenes is proposed as a pre-processing step for LiDAR odometry, reducing both the input volume and the interference from dynamic objects. A simulated dataset is first constructed according to the characteristics of point clouds in urban road scenes; an EdgeConv-based network, named LO-LineNet, is then pre-trained on this dataset; finally, a model-transfer strategy adapts the pre-trained model from the simulated dataset to real-world scenes without ground-truth labels. Experimental results on the KITTI Odometry Dataset and the Apollo SouthBay Dataset indicate that the proposed method accurately extracts reliable lines in urban road scenes in a self-supervised manner, and that using these extracted lines as odometry input significantly improves accuracy and efficiency in urban road scenes.
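For readers unfamiliar with the EdgeConv operation that LO-LineNet builds on, the sketch below illustrates the basic idea in NumPy: for each point, gather its k nearest neighbors, form edge features by concatenating the center point with the neighbor offsets, apply a shared linear map (a stand-in for the learned MLP in a real EdgeConv layer), and max-pool over the neighbors. The function names, the choice of k, and the single linear layer are illustrative assumptions, not details of LO-LineNet itself.

```python
import numpy as np

def knn(points, k):
    # Pairwise squared distances between all points (N, N).
    d = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    # Skip column 0 of the sort (the point itself) and keep k neighbors.
    return np.argsort(d, axis=1)[:, 1:k + 1]

def edge_conv(points, weight, k=4):
    """One EdgeConv-style layer (NumPy sketch, not the LO-LineNet code).

    Edge feature per (i, j) pair: [x_i, x_j - x_i], mapped by a shared
    linear transform `weight` and max-pooled over the k neighbors.
    """
    idx = knn(points, k)                                   # (N, k)
    neighbors = points[idx]                                # (N, k, 3)
    center = np.repeat(points[:, None, :], k, axis=1)      # (N, k, 3)
    edge_feat = np.concatenate([center, neighbors - center], axis=-1)  # (N, k, 6)
    out = edge_feat @ weight                               # (N, k, C_out)
    return out.max(axis=1)                                 # (N, C_out)

rng = np.random.default_rng(0)
pts = rng.standard_normal((16, 3))       # toy point cloud, 16 points
w = rng.standard_normal((6, 8))          # shared linear map, 6 -> 8 channels
features = edge_conv(pts, w, k=4)
print(features.shape)                    # (16, 8): one feature vector per point
```

In the full network, several such layers are stacked with nonlinearities and a per-point classifier predicts whether each point lies on a reliable line.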