Real-time, accurate three-dimensional object detection is one of the core tasks in autonomous driving environment perception. In recent years, advances in deep learning and lidar technology have driven significant progress in three-dimensional object detection algorithms for large-scale general scenarios. However, existing lidar-based three-dimensional object detection algorithms still struggle in complex traffic scenarios, where the difficulty lies in balancing detection accuracy against inference speed. To address this, the voxel-based single-stage three-dimensional object detection algorithm SECOND is adopted as the baseline, and an efficient single-stage vehicle detection framework tailored for complex autonomous driving scenarios is proposed. First, a residual structure is introduced and the feature channel numbers are redesigned in the three-dimensional feature extraction backbone, which effectively reduces the loss of spatial geometric features from the point cloud during feature extraction and makes model training more stable. Second, multi-scale feature fusion and a spatial attention mechanism are introduced to design a more efficient two-dimensional feature fusion backbone, which helps the model learn vehicle size and orientation. The proposed algorithm is trained and validated on the open-source ONCE dataset. Compared to the baseline, the average detection accuracy for vehicles improves by 5.64% while maintaining an inference speed of 20 frames per second (FPS), significantly enhancing the algorithm's perception of vehicles in complex traffic scenarios.
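The abstract does not specify the exact form of the spatial attention mechanism in the two-dimensional fusion backbone. As a minimal sketch only, the snippet below assumes a CBAM-style spatial attention: channel-wise average and max pooling produce two spatial maps, which are combined and passed through a sigmoid to yield a per-location weight that rescales the fused feature map. The fixed weighted sum here stands in for what would be a learned 2D convolution in a real network.

```python
import numpy as np

def spatial_attention(features, w_avg=0.5, w_max=0.5):
    """Hypothetical CBAM-style spatial attention sketch.

    features: (C, H, W) feature map from a 2D fusion backbone.
    Returns the reweighted feature map of the same shape.
    The weighted sum of pooled maps is a stand-in for a learned conv.
    """
    avg_pool = features.mean(axis=0)          # (H, W) channel-wise average
    max_pool = features.max(axis=0)           # (H, W) channel-wise max
    logits = w_avg * avg_pool + w_max * max_pool
    attn = 1.0 / (1.0 + np.exp(-logits))      # sigmoid -> (H, W) attention map
    return features * attn[None, :, :]        # broadcast over channels

# Toy usage: 64-channel 8x8 feature map.
feats = np.random.rand(64, 8, 8)
out = spatial_attention(feats)
```

In a detection backbone, such a map lets the network emphasize bird's-eye-view locations likely to contain vehicles before the detection head, which is one plausible way an attention mechanism could aid size and orientation estimation.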