Recent studies underline the contribution of brain-computer interface (BCI) applications to improving the quality of life of physically impaired subjects. In this context, to design an effective stroke rehabilitation or assistance system, the classification of motor imagery (MI) tasks is performed through deep learning (DL) algorithms. Although the use of DL in the BCI field remains relatively immature compared with fields such as natural language processing and object detection, DL has proven its effectiveness in carrying out this task. In this paper, a hybrid method that fuses a one-dimensional convolutional neural network (1D CNN) with a long short-term memory (LSTM) network is applied to classify four MI tasks, i.e., left hand, right hand, tongue, and feet movements. The temporal representation of the MI tasks is extracted by training the hybrid deep learning model after a principal component analysis (PCA)-based artefact removal step. The performance criteria defined for the BCI Competition IV dataset A are estimated. Ten-fold cross-validation (CV) results show that the proposed method outperforms state-of-the-art methods in classifying combined electroencephalogram (EEG)-electrooculogram (EOG) motor imagery tasks and is robust against data variations. The CNN-LSTM classification model reached an accuracy of 95.62% (±1.23) and a kappa value of 0.9462 (±0.0122) for the four-class MI dataset, validated using 10-fold CV. In addition, the receiver operating characteristic (ROC) curve, the area under the ROC curve (AUC) score, and the confusion matrix are evaluated for further interpretation.
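
To make the hybrid architecture concrete, the following is a minimal sketch of a 1D CNN-LSTM classifier for four-class MI trials, written with tensorflow.keras. The layer sizes, kernel widths, dropout rate, channel count (22 EEG + 3 EOG), and trial length (1000 samples, i.e. 4 s at 250 Hz) are illustrative assumptions for this sketch and are not taken from the paper's exact configuration.

```python
# Minimal sketch of a hybrid 1D CNN-LSTM motor imagery classifier.
# Assumptions (not from the paper): 25 input channels (22 EEG + 3 EOG),
# 1000-sample trials, and the specific layer hyperparameters below.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N_CHANNELS = 25   # assumed: 22 EEG + 3 EOG channels
N_SAMPLES = 1000  # assumed: 4 s trials sampled at 250 Hz
N_CLASSES = 4     # left hand, right hand, tongue, feet


def build_cnn_lstm(n_samples=N_SAMPLES, n_channels=N_CHANNELS, n_classes=N_CLASSES):
    """Stack 1D convolutional feature extraction on top of an LSTM temporal model."""
    model = models.Sequential([
        layers.Input(shape=(n_samples, n_channels)),
        # 1D CNN blocks learn local temporal filters across all channels.
        layers.Conv1D(64, kernel_size=11, activation="relu", padding="same"),
        layers.MaxPooling1D(pool_size=4),
        layers.Conv1D(128, kernel_size=11, activation="relu", padding="same"),
        layers.MaxPooling1D(pool_size=4),
        layers.Dropout(0.5),
        # The LSTM summarizes the downsampled feature sequence over time.
        layers.LSTM(64),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Example usage with random placeholder data shaped like preprocessed MI trials.
if __name__ == "__main__":
    X = np.random.randn(32, N_SAMPLES, N_CHANNELS).astype("float32")
    y = np.random.randint(0, N_CLASSES, size=32)
    model = build_cnn_lstm()
    model.summary()
    model.fit(X, y, epochs=1, batch_size=8, verbose=1)
```

In an evaluation of this kind, such a model would be retrained and tested within each of the 10 cross-validation folds, with PCA-based artefact removal applied to the trials beforehand, so that the reported accuracy and kappa reflect performance across data variations.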