Real-time and reliable three-dimensional (3D) object detection is critical for autonomous driving systems. Many recent 3D object detectors adopt the pillar-based approach, exemplified by PointPillars, for its simplicity, speed, modularity, and extensibility. However, the detection accuracy of pillar-based methods is limited by weak intra-pillar feature learning and insufficient multiscale pseudo-image feature extraction. We therefore present a multilayer pseudo-image 3D object detection method that enhances pillar-based detection. First, the proposed detector uses a multiscale feature extraction module to learn the point cloud representation within each pillar. The pillarized point cloud space is then converted into two pseudo-images of the same size, each encoding distinct spatial information. After each is processed by an inter-channel attention mechanism, the two pseudo-images are concatenated into a global pseudo-feature map. Finally, a multiscale feature extraction backbone with upper- and lower-level fusion processes the global pseudo-feature map into a high-level representation, from which a multi-task detection head generates the final detection results. Experiments on the KITTI dataset validate the effectiveness and superiority of the proposed method: compared with the original PointPillars, it improves 3D mAP, BEV mAP, and mAOS by 2.85%, 2.48%, and 2.76%, respectively.
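To make the pillar-to-pseudo-image step concrete: in pillar-based pipelines such as PointPillars, each non-empty pillar's learned feature vector is scattered back onto its bird's-eye-view (BEV) grid cell, yielding a dense 2D pseudo-image that standard convolutional backbones can process. The following is a minimal NumPy sketch of that scatter operation; the function name, array shapes, and toy inputs are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def scatter_to_pseudo_image(pillar_features, coords, H, W):
    """Scatter per-pillar feature vectors onto a BEV grid.

    pillar_features: (P, C) array, one C-dim feature per non-empty pillar.
    coords: (P, 2) integer array of (row, col) grid indices per pillar.
    Returns a (C, H, W) pseudo-image; cells with no pillar stay zero.
    (Illustrative sketch only, not the paper's code.)
    """
    P, C = pillar_features.shape
    canvas = np.zeros((C, H, W), dtype=pillar_features.dtype)
    # Fancy indexing places each pillar's feature at its grid location.
    canvas[:, coords[:, 0], coords[:, 1]] = pillar_features.T
    return canvas

# Toy example: 3 non-empty pillars with 4-channel features on an 8x8 grid.
feats = np.arange(12, dtype=np.float32).reshape(3, 4)
coords = np.array([[0, 0], [2, 5], [7, 7]])
img = scatter_to_pseudo_image(feats, coords, 8, 8)
```

In the proposed method this conversion is performed twice, producing two same-sized pseudo-images that encode different spatial information before the attention and fusion stages.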