Motor Imagery (MI) is a mental process in which an individual rehearses body movements without performing any physical action. Motor Imagery Brain-Computer Interfaces (MI-BCIs) are AI-driven systems that capture the brain activity patterns associated with this mental process and convert them into commands for external devices. Traditionally, MI-BCIs have relied on Machine Learning (ML) algorithms, which require extensive signal processing and feature engineering to extract changes in sensorimotor rhythms (SMR). In recent years, however, Deep Learning (DL) models have gained popularity for EEG classification because they automatically extract spatio-temporal features from the signals. In this study, EEG signals from 54 subjects who performed an MI task of left- or right-hand grasp were employed to compare the performance of two MI-BCI classifiers: an ML approach versus a DL approach. In the ML approach, Common Spatial Patterns (CSP) was used for feature extraction, and a Linear Discriminant Analysis (LDA) model was then employed for binary classification of the MI task. In the DL approach, a Convolutional Neural Network (CNN) model was trained on the raw EEG signals. The mean classification accuracies achieved by the CNN and CSP+LDA models were 69.42% and 52.56%, respectively. Further analysis showed that the DL approach improved classification accuracy for all subjects, by 2.37% to 28.28%, and that the improvement was significantly stronger for low performers. Our findings show promise for the use of DL models in future MI-BCI systems, particularly for BCI-inefficient users who cannot produce the sensorimotor patterns required by conventional ML approaches.
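For readers unfamiliar with the CSP+LDA baseline, the pipeline can be sketched as follows. This is a minimal illustration on synthetic data, not the study's actual implementation: the channel count, trial count, sampling parameters, and helper names are all hypothetical, and a real pipeline would additionally band-pass filter the EEG and cross-validate.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_filters(X, y, n_components=4):
    """Fit CSP spatial filters for a binary task.

    X: (trials, channels, samples) EEG array; y: binary labels.
    Returns W of shape (n_components, channels).
    """
    covs = []
    for c in np.unique(y):
        # Average trace-normalized spatial covariance per class
        cov = np.mean([t @ t.T / np.trace(t @ t.T) for t in X[y == c]], axis=0)
        covs.append(cov)
    # Generalized eigendecomposition: covs[0] v = w (covs[0] + covs[1]) v
    evals, evecs = eigh(covs[0], covs[0] + covs[1])
    # Eigenvalues far from 0.5 correspond to the most discriminative filters
    order = np.argsort(np.abs(evals - 0.5))[::-1]
    return evecs[:, order[:n_components]].T

def log_var_features(X, W):
    """Project trials through CSP filters and take normalized log-variance."""
    Z = np.einsum('fc,tcs->tfs', W, X)          # (trials, filters, samples)
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Synthetic two-class data: 100 trials, 16 channels, 256 samples (hypothetical)
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = rng.standard_normal((100, 16, 256))
X[y == 1, 0] *= 3.0  # class 1 carries extra variance on channel 0

W = csp_filters(X, y)
clf = LinearDiscriminantAnalysis().fit(log_var_features(X, W), y)
acc = clf.score(log_var_features(X, W), y)
```

Because CSP maximizes the variance ratio between the two classes and LDA only has to separate the resulting low-dimensional log-variance features, this baseline is fast to train, but it depends on the subject producing distinct SMR patterns, which is exactly where the DL approach in this study showed its largest gains.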