The emerging field of brain–computer interfaces (BCIs) has spurred extensive analysis of electroencephalogram (EEG) signals for motor imagery classification tasks. However, the accuracy of EEG classification models remains limited by the low signal-to-noise ratio and nonlinear nature of brain signals, as well as the scarcity of EEG data for training. To address these challenges, this study proposes a new approach that combines time-frequency analysis with a hybrid parallel–series attention-based deep learning network for EEG signal classification. The proposed framework comprises three main elements: first, a scaling-basis chirplet transform designed to effectively capture the characteristics of nonstationary EEG signals; second, a hybrid parallel–series attention-based deep learning network for feature extraction, in which the serial information flow continuously expands the receptive fields of the output neurons while the parallel information flow extracts features from different regions; and third, machine learning classifiers that predict the corresponding motor imagery state. The developed EEG-based motor imagery classification framework is evaluated on two open-source datasets, BCI Competition III dataset IIIa and BCI Competition IV dataset IIa, achieving average classification accuracies of 95.55% and 90.18%, respectively. The experimental findings demonstrate that the proposed method attains promising motor imagery discrimination performance, surpassing existing techniques in terms of classification accuracy and kappa coefficient.
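To make the parallel–series idea concrete, the following is a minimal PyTorch sketch of one such block operating on a time-frequency representation of an EEG trial. It is not the authors' architecture: the kernel sizes, channel counts, and the squeeze-and-excitation-style channel attention are illustrative assumptions; only the structural idea (parallel branches over different regions, blocks stacked in series to grow the receptive field) follows the description above.

```python
# Illustrative sketch only: layer sizes, kernel widths, and the attention
# mechanism are assumptions, not the paper's exact network.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (assumed mechanism)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                                # squeeze spatial dims
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                           # per-channel weights
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)


class ParallelSeriesBlock(nn.Module):
    """Parallel branches with different kernel sizes extract features from
    different time-frequency regions; stacking these blocks in series keeps
    enlarging the receptive field of the output neurons."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        kernel_sizes = (3, 5, 7)                # assumed branch kernel sizes
        branch_ch = out_ch // len(kernel_sizes)
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, k, padding=k // 2),
                nn.BatchNorm2d(branch_ch),
                nn.ELU(),
            )
            for k in kernel_sizes
        )
        self.attention = ChannelAttention(branch_ch * len(kernel_sizes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Parallel information flow: every branch sees the same input.
        x = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.attention(x)


if __name__ == "__main__":
    # Dummy time-frequency input: batch of 8, 1 channel, 64 frequency bins, 256 time steps.
    tfr = torch.randn(8, 1, 64, 256)
    # Serial information flow: two blocks stacked one after another.
    net = nn.Sequential(ParallelSeriesBlock(1, 24), ParallelSeriesBlock(24, 48))
    features = net(tfr)
    print(features.shape)  # torch.Size([8, 48, 64, 256])
```

In a full pipeline, the resulting feature maps would be flattened (or pooled) and passed to a conventional machine learning classifier, matching the third stage of the framework described above.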