Bundle Adjustment (BA) refers to the problem of simultaneously determining sensor poses and scene geometry, which is a fundamental problem in robot vision. This paper presents an efficient and consistent bundle adjustment method for lidar sensors. The method employs edge and plane features to represent the scene geometry, and directly minimizes the natural Euclidean distance from each raw point to the respective geometric feature. A nice property of this formulation is that the geometric features can be solved analytically, drastically reducing the dimension of the numerical optimization. To represent and solve the resultant optimization problem more efficiently, this paper then proposes a novel concept, the point cluster, which encodes all raw points associated with the same feature by a compact set of parameters, the point cluster coordinates. We derive the closed-form derivatives, up to second order, of the BA optimization based on the point cluster coordinates and show their theoretical properties, such as the null spaces and sparsity. Based on these theoretical results, this paper develops an efficient second-order BA solver. Besides estimating the lidar poses, the solver also exploits the second-order information to estimate the pose uncertainty caused by measurement noise, leading to consistent estimates of lidar poses. Moreover, thanks to the use of point clusters, the developed solver fundamentally avoids enumerating each raw point (which is very time-consuming due to the large number of points) in all steps of the optimization: cost evaluation, derivative evaluation, and uncertainty evaluation. The proposed method is extensively evaluated at different levels: consistency, accuracy, and computational efficiency, in both simulated and real-world environments. Benchmark evaluation on 19 real-world open sequences covering various datasets (Hilti, NTU-VIRAL, and UrbanLoco), environments (campus, urban streets, offices, laboratory, and construction sites), lidar types (Ouster OS0-64, Ouster OS1-16, Velodyne HDL-32E), and motion types (handheld, UAV-based, and ground-vehicle-based) shows that our method achieves consistently and significantly higher performance than other state-of-the-art counterparts in terms of localization accuracy, mapping quality, and computational efficiency. In particular, our method achieves a mapping accuracy at the level of the lidar measurement noise (i.e., a few centimeters) while processing all sequences in less than half a minute on a standard desktop CPU. Finally, we show how our proposed method effectively improves the accuracy and/or computational efficiency of several important robotic techniques, including lidar-inertial odometry, multi-lidar extrinsic calibration, and high-accuracy global mapping. The implementation of our method is open sourced to benefit the robotics community and beyond.
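
As a concrete illustration of the formulation summarized above, the following sketch writes out a point-to-plane BA cost for a single plane feature and the standard closed-form elimination of the plane parameters. The notation (poses $(\mathbf{R}_t, \mathbf{t}_t)$, raw points $\mathbf{p}_{t,i}$, plane normal $\mathbf{n}$ and point $\mathbf{q}$) is illustrative and not taken from the abstract, and the eigenvalue form below is the generic reduction for point-to-plane costs under these assumptions, not a verbatim statement of the paper's derivation.

\begin{align}
  % Sum of squared point-to-plane distances over all scans t and points i,
  % with plane parameters (n, q) and unit normal ||n|| = 1:
  c(\mathbf{T}, \mathbf{n}, \mathbf{q})
    &= \sum_{t,i} \left\| \mathbf{n}^{\top}
       \big( \mathbf{R}_t \mathbf{p}_{t,i} + \mathbf{t}_t - \mathbf{q} \big) \right\|^2 , \\
  % Minimizing over (n, q) in closed form (q at the centroid of the
  % transformed points, n the eigenvector of the scatter matrix A with
  % the smallest eigenvalue) leaves a cost in the poses only:
  \min_{\mathbf{n}, \mathbf{q}} c(\mathbf{T}, \mathbf{n}, \mathbf{q})
    &= \lambda_{\min}\!\big( \mathbf{A}(\mathbf{T}) \big), \quad
  \mathbf{A}(\mathbf{T}) = \sum_{t,i}
    \big( \mathbf{R}_t \mathbf{p}_{t,i} + \mathbf{t}_t - \bar{\mathbf{p}} \big)
    \big( \mathbf{R}_t \mathbf{p}_{t,i} + \mathbf{t}_t - \bar{\mathbf{p}} \big)^{\top},
\end{align}

where $\bar{\mathbf{p}}$ denotes the mean of the transformed points, so the geometry parameters never enter the numerical optimization. Note also that, in this sketch, $\mathbf{A}(\mathbf{T})$ depends on the raw points of each scan only through aggregate statistics such as the per-scan point sum, second moment, and point count, which is the kind of compact per-feature summary the abstract refers to as point cluster coordinates.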