Deblurring images of dynamic scenes is a challenging problem. Recently, deep-learning-based image deblurring methods have made significant progress. However, these methods usually stack ordinary convolutional layers or enlarge the convolution kernels, which still yields limited receptive fields, unsatisfactory deblurring results, and a heavy computational burden. We therefore propose an improved U-Net (U-shaped convolutional neural network) model to restore blurred images. We first design the model structure, whose main components are depth-wise separable convolution, residual depth-wise separable convolution, the wavelet transform, the inverse wavelet transform, and a DMRFC (dense multi-receptive-field channel) module. The depth-wise separable convolution reduces computation and the number of parameters compared with standard convolution. The residual depth-wise separable convolution propagates detailed information across different layers better than standard convolution and a standard residual block. The wavelet transform performs downsampling by separating the contextual and texture information of the image, which also eases model training. The inverse wavelet transform performs upsampling while reducing the loss of image information. Finally, by combining an enlarged receptive field with a channel attention mechanism, the proposed DMRFC module extracts detailed image information, further improving the quality of images reconstructed by the inverse wavelet transform. Experiments on the public GOPRO dataset show that the proposed method produces higher-quality visual results, raising PSNR (peak signal-to-noise ratio) to 30.83 dB and SSIM (structural similarity) to 0.948, while requiring fewer model parameters and a shorter restoration time, yielding a more lightweight image deblurring method.
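The parameter savings of depth-wise separable convolution can be checked with a back-of-the-envelope count: a standard k×k convolution uses C_in·C_out·k² weights, while a depth-wise separable one uses C_in·k² (depth-wise) plus C_in·C_out (point-wise), a reduction factor of roughly 1/C_out + 1/k². A minimal sketch (the channel sizes below are illustrative, not taken from the paper):

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weight count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def dws_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weight count of a depth-wise separable convolution:
    a k x k depth-wise conv (one filter per input channel)
    followed by a 1 x 1 point-wise conv."""
    return c_in * k * k + c_in * c_out

# Illustrative layer: 64 -> 128 channels, 3 x 3 kernel.
standard = conv_params(64, 128, 3)       # 64 * 128 * 9  = 73728
separable = dws_conv_params(64, 128, 3)  # 64 * 9 + 64 * 128 = 8768
print(standard, separable, standard / separable)  # ~8.4x fewer weights
```

For 3×3 kernels the reduction approaches a factor of 9 as the output channel count grows, which is why this factorization is a common choice for lightweight models.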