Synthetic aperture radar (SAR) images commonly suffer from speckle noise, which poses a significant challenge to their analysis and interpretation. Existing convolutional neural network (CNN) based despeckling methods remove speckle effectively, but they have several limitations: they do not decouple complex background information in a multi-resolution manner, their deep network structures require many parameters, which limits their applicability on mobile devices, and extracting key speckle features in the presence of complex backgrounds remains difficult. This study addresses these limitations by introducing a lightweight pyramid and attention-based despeckling (PAN-DeSpeck) network. The primary objective is to enhance image quality and enable improved information interpretation, particularly on mobile devices and in scenes with complex backgrounds. PAN-DeSpeck leverages domain-specific knowledge and integrates Gaussian-Laplacian image pyramid decomposition for multi-resolution image analysis, which effectively decouples complex background information and improves despeckling performance. An attention mechanism selectively focuses on key speckle features and facilitates the removal of complex backgrounds. Recursive and residual blocks keep the network lightweight and computationally efficient and accelerate training while maintaining high performance. Comprehensive evaluations demonstrate that PAN-DeSpeck outperforms existing image restoration methods, achieving an average peak signal-to-noise ratio (PSNR) of 28.36 dB and a structural similarity index (SSIM) of 0.905 in reducing speckle noise in SAR images. The source code for PAN-DeSpeck is available on GitHub.
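To make the multi-resolution idea concrete, the sketch below shows a generic Gaussian-Laplacian pyramid decomposition in PyTorch, of the kind the abstract refers to. It is an illustrative implementation only, not the authors' released code: the kernel choice, number of levels, and padding mode are assumptions, and the despeckling subnetworks, attention modules, and recursive/residual blocks described above are not included.

```python
# Illustrative sketch of Gaussian-Laplacian pyramid decomposition (assumed
# hyperparameters; not the PAN-DeSpeck implementation).
import torch
import torch.nn.functional as F


def gaussian_kernel(device, dtype):
    # 5x5 binomial approximation of a Gaussian kernel, normalized to sum to 1.
    k = torch.tensor([1., 4., 6., 4., 1.], device=device, dtype=dtype)
    k = torch.outer(k, k)
    return (k / k.sum()).view(1, 1, 5, 5)


def gaussian_blur(x, kernel):
    # Depthwise Gaussian smoothing with reflect padding to preserve image size.
    c = x.shape[1]
    x = F.pad(x, (2, 2, 2, 2), mode='reflect')
    return F.conv2d(x, kernel.expand(c, 1, 5, 5), groups=c)


def laplacian_pyramid(x, levels=3):
    """Decompose x (N, C, H, W) into [L_0, ..., L_{levels-1}, G_levels].

    Each Laplacian band L_i holds the detail lost between successive Gaussian
    levels; the final entry is the coarsest low-pass residual.
    """
    kernel = gaussian_kernel(x.device, x.dtype)
    bands, current = [], x
    for _ in range(levels):
        blurred = gaussian_blur(current, kernel)
        down = F.avg_pool2d(blurred, kernel_size=2)           # next Gaussian level
        up = F.interpolate(down, size=current.shape[-2:],
                           mode='bilinear', align_corners=False)
        bands.append(current - up)                            # Laplacian detail band
        current = down
    bands.append(current)                                     # coarsest background level
    return bands


def reconstruct(bands):
    """Invert the decomposition by upsampling and adding the bands back."""
    current = bands[-1]
    for band in reversed(bands[:-1]):
        current = band + F.interpolate(current, size=band.shape[-2:],
                                       mode='bilinear', align_corners=False)
    return current
```

In a pyramid-based despeckler, each band can be processed by its own lightweight branch (for example, attention-guided residual blocks on the detail bands and a coarser branch for the low-pass residual) before reconstruction, which is how such a design separates fine speckle from slowly varying background.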