Data processing for airborne full-waveform light detection and ranging (LiDAR) systems has become a research hotspot in the LiDAR field in recent years. However, the accuracy and reliability of full-waveform classification remain a challenge: the handcrafted features and deep learning techniques used in existing methods cannot fully exploit the temporal features and spatial information contained in the full waveform. We therefore convert the waveforms into Gramian angular summation field (GASF) images via a polar-coordinate transformation, which preserves their temporal dependencies. By introducing spatial attention modules into the neural network, we emphasize the locations of texture information in the GASF images. Finally, we use open-source and simulated data to evaluate the impact of different network architectures and transformation methods. Compared with the state-of-the-art method, the proposed method achieves higher precision and F1 scores. The results suggest that transforming the full waveform into GASF images and introducing a spatial attention module outperforms other classification methods.
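
The GASF transformation mentioned above can be illustrated with a minimal sketch (not the authors' implementation): each waveform sample is rescaled to [-1, 1], mapped to a polar angular coordinate via the arccosine, and the image is formed from the cosines of pairwise angle sums, so the row/column order of the image preserves the temporal order of the waveform. The function name and the rescaling choice are assumptions for illustration.

```python
import numpy as np

def gasf(waveform):
    """Convert a 1-D waveform into a GASF image (illustrative sketch).

    Steps: min-max rescale to [-1, 1], map amplitudes to polar angles
    with arccos, then take cos(phi_i + phi_j) for every sample pair.
    """
    x = np.asarray(waveform, dtype=float)
    # Rescale to [-1, 1] so the arccosine is well defined.
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    # Polar angular coordinate of each sample.
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    # GASF[i, j] = cos(phi_i + phi_j); temporal order is kept on both axes.
    return np.cos(phi[:, None] + phi[None, :])

# A waveform of N samples yields an N x N symmetric image.
img = gasf([0.1, 0.5, 0.9, 0.4, 0.2])
```

The resulting image is symmetric with values in [-1, 1], and can be fed to a 2-D convolutional network, such as the attention-augmented classifier described in the abstract.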