Motor Imagery Brain-Computer Interfaces (MI-BCIs) have attracted considerable attention in recent years owing to their potential to enhance rehabilitation and prosthetic control for individuals with motor disabilities. However, accurate classification of motor imagery signals remains challenging due to high inter-subject variability and the non-stationarity of electroencephalogram (EEG) data. In addition, EEG data for MI-BCIs are difficult to acquire, so the amount of data available for training is typically limited. In this study, the proposed data augmentation technique, Adaptive Cross-Subject Segment Replacement (ACSSR), is compared with several established data augmentation techniques. In conjunction with the proposed deep learning framework, ACSSR allows pairs of similar subjects to benefit from one another's data and thereby boost the classification performance of MI-BCIs. The proposed framework features a multi-domain feature extractor, based on Common Spatial Patterns (CSP) with a sliding window, and a parallel two-branch Convolutional Neural Network (CNN). The proposed methodology has been evaluated on the multi-class BCI Competition IV Dataset 2a through repeated 10-fold cross-validation. Experimental results indicate that applying ACSSR within the proposed framework yields a considerable improvement in classification performance (80.46%) over classification without data augmentation (77.63%) and over other fundamental data augmentation techniques used in the literature. The study contributes to the development of effective MI-BCIs by demonstrating the ability of ACSSR to address the challenges of motor imagery signal classification.
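
To make the core idea of cross-subject segment replacement more concrete, the following minimal Python/NumPy sketch illustrates one plausible reading of the augmentation step: a time segment of each trial from one subject is replaced by the corresponding segment from a same-class trial of a paired (similar) subject. The function name, the `segment_fraction` parameter, and the way the donor trial is chosen are illustrative assumptions, not the authors' exact ACSSR procedure (which, for example, also involves an adaptive selection of similar subject pairs not shown here).

```python
import numpy as np

def cross_subject_segment_replacement(trials_a, labels_a, trials_b, labels_b,
                                       segment_fraction=0.2, seed=None):
    """Sketch of cross-subject segment replacement augmentation.

    trials_*: EEG trials of shape (n_trials, n_channels, n_samples)
    labels_*: class labels of shape (n_trials,)
    segment_fraction: fraction of the trial length to replace (assumed value)
    Returns augmented copies of subject A's trials and their labels.
    """
    rng = np.random.default_rng(seed)
    n_trials, _, n_samples = trials_a.shape
    seg_len = int(segment_fraction * n_samples)

    augmented = trials_a.copy()
    for i in range(n_trials):
        # Pick a donor trial of the same motor imagery class from the paired subject.
        same_class = np.flatnonzero(labels_b == labels_a[i])
        if same_class.size == 0:
            continue
        donor = trials_b[rng.choice(same_class)]
        # Replace one randomly located time segment across all channels.
        start = rng.integers(0, n_samples - seg_len + 1)
        augmented[i, :, start:start + seg_len] = donor[:, start:start + seg_len]

    return augmented, labels_a.copy()
```

The augmented trials produced this way would typically be appended to the original training set before feature extraction and CNN training, enlarging the effective dataset for the subject being classified.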