Atmospheric turbulence is a major source of image distortion in mid- to long-range target observation tasks. Neural networks have become a powerful tool for this problem owing to their strong capacity to fit nonlinear spatial-domain mappings. However, turbulence-induced degradation is not confined to the spatial domain; it also manifests in the frequency domain. Although the community has increasingly recognized the value of frequency-domain information in neural networks in recent years, how to combine dual-domain information to reconstruct high-quality images remains underexplored in blind turbulence image restoration. Exploiting the close coupling between spatial- and frequency-domain degradation, we introduce a novel neural network architecture, the Dual-Domain Removal Turbulence Network (DDRTNet), designed to improve the quality of reconstructed images. DDRTNet combines multiscale spatial- and frequency-domain attention mechanisms with a dual-domain collaborative learning strategy, effectively integrating global and local information for efficient restoration of atmospheric turbulence-degraded images. Experiments show that DDRTNet significantly outperforms existing methods, validating its effectiveness for blind turbulence image restoration.
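To illustrate the dual-domain idea summarized above, the following PyTorch sketch shows one possible way a block could combine a spatial attention path with a frequency-domain path via the 2-D FFT. This is a minimal sketch under assumed design choices: the module name `DualDomainBlock`, the channel gating, the pointwise spectral filtering, and the fusion scheme are illustrative assumptions, not the authors' DDRTNet implementation.

```python
# Illustrative sketch only: a minimal dual-domain (spatial + frequency) feature
# block. Channel sizes and the fusion scheme are assumptions for illustration.
import torch
import torch.nn as nn


class DualDomainBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Spatial branch: local detail via convolutions plus a simple channel gate.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.GELU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )
        # Frequency branch: global context by filtering the 2-D spectrum
        # (real and imaginary parts stacked along the channel axis).
        self.freq = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, 1),
            nn.GELU(),
            nn.Conv2d(2 * channels, 2 * channels, 1),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        # Spatial path with channel attention.
        s = self.spatial(x)
        s = s * self.gate(s)
        # Frequency path: FFT -> learned pointwise spectral filtering -> inverse FFT.
        spec = torch.fft.rfft2(x, norm="ortho")
        spec = torch.cat([spec.real, spec.imag], dim=1)
        spec = self.freq(spec)
        re, im = spec.chunk(2, dim=1)
        f = torch.fft.irfft2(torch.complex(re, im), s=(h, w), norm="ortho")
        # Fuse both domains and keep a residual connection.
        return x + self.fuse(torch.cat([s, f], dim=1))


if __name__ == "__main__":
    block = DualDomainBlock(channels=32)
    y = block(torch.randn(1, 32, 64, 64))
    print(y.shape)  # torch.Size([1, 32, 64, 64])
```

In such a design, the spatial path captures local structure while the FFT-based path applies a learned global filter over the whole spectrum, which is one plausible reading of how spatial and frequency information can be integrated in a single block.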