LiDAR-based SLAM is recognized as an effective method for providing localization guidance in rough environments. However, off-the-shelf LiDAR-based SLAM methods suffer from significant pose estimation drift, particularly in components along the vertical direction, when traversing uneven terrain. This deficiency typically leads to a noticeably distorted global map. In this article, we present a LiDAR-based SLAM method, termed Rotation-Optimized LiDAR-Only (ROLO) SLAM, to improve the accuracy of pose estimation for ground vehicles in rough terrain. The method exploits forward location prediction to coarsely eliminate the location offset between consecutive scans, enabling separate and accurate determination of location and orientation at the front end. Furthermore, we adopt a parallel-capable spatial voxelization for correspondence matching. We develop a spherical alignment-guided rotation registration within each voxel to estimate the rotation of the vehicle. By incorporating geometric alignment, we introduce a motion constraint into the optimization formulation to enable rapid and effective estimation of the LiDAR's translation. Subsequently, we extract several keyframes to construct the submap and exploit scan-to-submap alignment for precise pose estimation. Meanwhile, a global-scale factor graph is established to reduce cumulative errors. We conduct diverse experiments in various scenes to evaluate our method. The results demonstrate that ROLO-SLAM excels in pose estimation for ground vehicles and outperforms existing state-of-the-art LiDAR SLAM frameworks.
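
To illustrate the decoupled rotation/translation estimation outlined above, the following is a minimal sketch of a coarse-to-fine front end: a constant-velocity forward prediction of translation, a closed-form (Kabsch/SVD) rotation estimate over matched points, and a closed-form translation given that rotation. All function names, parameters, and the SVD-based rotation solver are illustrative assumptions standing in for the voxel-wise spherical-alignment registration; this is not the authors' implementation.

```python
# Hypothetical sketch of a rotation/translation-decoupled scan-matching front end
# (stand-in for ROLO-SLAM's front end; not the authors' code).
import numpy as np

def forward_predict(t_prev: np.ndarray, t_prev2: np.ndarray) -> np.ndarray:
    """Constant-velocity forward prediction of the next translation (coarse step)."""
    return t_prev + (t_prev - t_prev2)

def estimate_rotation(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Rotation between matched point sets via SVD (Kabsch), a generic stand-in
    for the spherical alignment-guided rotation registration within each voxel."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Reflection correction keeps the result a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T

def estimate_translation(src: np.ndarray, dst: np.ndarray, R: np.ndarray) -> np.ndarray:
    """Closed-form translation given the rotation (centroid alignment),
    refining the coarse forward prediction."""
    return dst.mean(axis=0) - R @ src.mean(axis=0)

# Usage: with matched 3-D points src/dst from consecutive scans,
#   R = estimate_rotation(src, dst); t = estimate_translation(src, dst, R)
# so that dst ≈ (R @ src.T).T + t.
```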