Real-time, high-precision localization is vital for many modules of unmanned vehicles. At present, costly integrated navigation systems combining RTK (Real-Time Kinematic) positioning and an IMU (Inertial Measurement Unit) are widely used, but their accuracy cannot meet the requirements, and they even fail in many scenes. To reduce cost and improve localization accuracy and stability, we propose a precise and robust segmentation-based LiDAR (Light Detection and Ranging) localization system aided by a MEMS (Micro-Electro-Mechanical System) IMU and designed for high-level autonomous driving. First, we extract multiple feature types, including ground, road curb, edge, and surface, from the online frame using a series of proposed efficient low-level semantic-segmentation-based feature extraction algorithms. Next, based on the extracted features, we match adjacent frames in the LiDAR odometry module and match the current frame against a dynamically loaded, pre-built feature point-cloud map in the LiDAR localization module to precisely estimate the 6-DoF (Degrees of Freedom) pose, using the proposed prior-information-aware category matching algorithm and a multi-group-step L-M (Levenberg-Marquardt) optimization algorithm. Finally, the LiDAR localization results are fused with MEMS IMU data through an error-state Kalman filter to produce smoother and more accurate localization at a high frequency of 200 Hz. The proposed system achieves an RMS (Root Mean Square) accuracy of 3~5 cm in position and 0.05~0.1° in orientation, outperforming previous state-of-the-art systems. Its robustness and adaptability have been verified on more than 1000 km of localization test data collected in various challenging scenes, including congested urban roads, narrow tunnels, textureless highways, and harsh weather such as rain.
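To give a flavor of the L-M step used in feature matching, the sketch below solves a deliberately simplified sub-problem: a translation-only point-to-plane alignment. This is a minimal illustration under stated assumptions, not the paper's multi-group-step algorithm, which estimates the full 6-DoF pose over several feature categories (ground, road curb, edge, surface); the function name `lm_translation` and all noise/damping values are hypothetical.

```python
import numpy as np

# Toy Levenberg-Marquardt sketch: estimate a translation t that minimizes
# the sum of squared point-to-plane residuals n_i . (p_i + t - q_i), where
# n_i is the plane normal and q_i a point on the plane. The paper's L-M
# operates on the full 6-DoF pose; this translation-only version is an
# illustrative simplification.
def lm_translation(points, plane_pts, normals, iters=10, lam=1e-3):
    t = np.zeros(3)

    def residuals(t):
        # Row-wise dot product: r_i = n_i . (p_i + t - q_i)
        return np.einsum('ij,ij->i', normals, points + t - plane_pts)

    for _ in range(iters):
        r = residuals(t)
        J = normals                           # Jacobian: d r_i / d t = n_i^T
        A = J.T @ J + lam * np.eye(3)         # damped normal equations
        delta = np.linalg.solve(A, -J.T @ r)  # L-M step
        t_new = t + delta
        if np.sum(residuals(t_new) ** 2) < np.sum(r ** 2):
            t, lam = t_new, lam * 0.5         # accept step, reduce damping
        else:
            lam *= 10.0                       # reject step, increase damping
    return t
```

The damping parameter `lam` interpolates between Gauss-Newton (small `lam`, fast near the optimum) and gradient descent (large `lam`, robust far from it), which is what makes L-M attractive for scan matching with imperfect initial poses.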
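The fusion stage can be sketched with a minimal error-state Kalman filter: the nominal state is propagated at the IMU rate, and each LiDAR localization result estimates an error state that is injected into the nominal state and then reset. The 1-D position/velocity model, class name, and noise values below are illustrative assumptions; the paper's filter works on the full 6-DoF pose.

```python
import numpy as np

# Minimal 1-D error-state Kalman filter sketch (illustrative only; the
# system described in the abstract fuses 6-DoF LiDAR poses with MEMS IMU
# data at 200 Hz, with a correspondingly larger state).
class ErrorStateKF:
    def __init__(self, pos=0.0, vel=0.0):
        self.pos = pos                    # nominal state, IMU-propagated
        self.vel = vel
        self.P = np.eye(2) * 0.1          # error-state covariance [dp, dv]
        self.Q = np.diag([1e-4, 1e-3])    # process noise (assumed values)
        self.R = np.array([[0.03 ** 2]])  # LiDAR position noise, ~3 cm RMS

    def propagate(self, accel, dt):
        """High-rate IMU propagation of the nominal state (e.g. 200 Hz)."""
        self.pos += self.vel * dt + 0.5 * accel * dt ** 2
        self.vel += accel * dt
        F = np.array([[1.0, dt], [0.0, 1.0]])  # error-state transition
        self.P = F @ self.P @ F.T + self.Q * dt

    def update(self, lidar_pos):
        """Low-rate LiDAR update: estimate the error state, inject it into
        the nominal state, then implicitly reset it to zero."""
        H = np.array([[1.0, 0.0]])             # measure position only
        y = lidar_pos - self.pos               # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)    # Kalman gain
        dx = (K * y).flatten()                 # error-state estimate
        self.pos += dx[0]                      # inject correction
        self.vel += dx[1]
        self.P = (np.eye(2) - K @ H) @ self.P
```

Keeping the error state small and near-linear is what lets the filter output smooth, high-rate poses while absorbing the lower-rate, higher-accuracy LiDAR corrections.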