Electroencephalogram (EEG) signals suffer substantially from motion artifacts when recorded in ambulatory settings using wearable sensors. Because the diagnosis of many neurological diseases relies heavily on clean EEG data, it is critical to eliminate motion artifacts from motion-corrupted EEG signals using reliable and robust algorithms. Although a few deep learning-based models have been proposed for the removal of ocular, muscle, and cardiac artifacts from EEG data, to the best of our knowledge no attempt has been made to remove motion artifacts from motion-corrupted EEG signals. In this paper, a novel 1D convolutional neural network (CNN), called the multi-layer multi-resolution spatially pooled (MLMRS) network, is proposed for EEG motion artifact removal via signal reconstruction. The performance of the proposed model was compared with that of ten other 1D CNN models: FPN, LinkNet, UNet, UNet+, UNetPP, UNet3+, AttentionUNet, MultiResUNet, DenseInceptionUNet, and AttentionUNet++, in removing motion artifacts from motion-contaminated single-channel EEG signals. All eleven deep CNN models were trained and tested using a single-channel benchmark EEG dataset from PhysioNet containing 23 sets of motion-corrupted and reference ground-truth EEG signals. The leave-one-out cross-validation method was used in this work. The performance of the deep learning models is measured using three well-known performance metrics: mean absolute error (MAE)-based reconstruction error, the difference in the signal-to-noise ratio (ΔSNR), and the percentage reduction in motion artifacts (η). The proposed MLMRS-Net model showed the best denoising performance, producing average ΔSNR, η, and MAE values of 26.64 dB, 90.52%, and 0.056, respectively, across all 23 sets of EEG recordings. The proposed model outperformed all existing state-of-the-art techniques in terms of average η improvement.
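The three evaluation metrics can be sketched as follows. This is a minimal illustrative implementation, assuming conventional definitions: SNR as the ratio of reference-signal power to residual-error power, and η as the fractional reduction in artifact power relative to the ground truth; the paper's exact formulations (e.g., for η) may differ.

```python
import numpy as np

def mae(reference, denoised):
    # Mean absolute error between ground-truth and reconstructed EEG
    return float(np.mean(np.abs(reference - denoised)))

def snr_db(reference, estimate):
    # SNR in dB: reference-signal power over residual-error power
    residual = reference - estimate
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(residual ** 2))

def delta_snr(reference, corrupted, denoised):
    # Improvement in SNR (dB) achieved by the denoising model
    return snr_db(reference, denoised) - snr_db(reference, corrupted)

def eta(reference, corrupted, denoised):
    # Percentage reduction in artifact: fraction of the artifact power
    # in the corrupted signal that the model removed (assumed definition)
    artifact_before = np.sum((corrupted - reference) ** 2)
    artifact_after = np.sum((denoised - reference) ** 2)
    return 100.0 * (1.0 - artifact_after / artifact_before)
```

A perfect reconstruction yields MAE of 0, an unbounded ΔSNR, and η of 100%; a model that leaves the corrupted signal unchanged yields ΔSNR of 0 dB and η of 0%.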