An important subfield of brain–computer interface research is the classification of motor imagery (MI) signals, in which an action, for example a hand movement, is mentally rehearsed rather than physically executed. The brain dynamics of MI are usually measured by electroencephalography (EEG) owing to its noninvasiveness. The next generation of brain–computer interface systems can benefit from generative deep learning (GDL) models, which enable end‐to‐end (e2e) machine learning and can improve classification accuracy. In this study, to exploit the e2e property of deep learning models, a novel GDL methodology is proposed that requires only minimal, objective‐free preprocessing. Furthermore, to handle complicated multi‐class MI–EEG signals, an innovative multilevel GDL‐based classification scheme is proposed. The effectiveness of the proposed model and its robustness against noisy MI–EEG signals are evaluated using two different GDL models, namely a deep belief network and a stacked sparse autoencoder, both trained in an e2e manner. Experimental results demonstrate the effectiveness of the proposed methodology, with improved accuracy compared with the widely used filter bank common spatial patterns algorithm.
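To make the e2e idea concrete, the following is a minimal sketch, not the authors' implementation, of one of the two GDL models mentioned above: a stacked sparse autoencoder trained greedily layer by layer and then fine-tuned end-to-end with a softmax head on flattened MI–EEG trials. The channel count, trial length, layer widths, sparsity settings, and the synthetic data are all assumptions made purely for illustration.

```python
# Illustrative sketch of an end-to-end stacked sparse autoencoder (SSAE)
# classifier for multi-class MI-EEG trials. All dimensions and hyperparameters
# below are assumed for demonstration, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CHANNELS, N_SAMPLES, N_CLASSES = 22, 250, 4   # assumed trial geometry
INPUT_DIM = N_CHANNELS * N_SAMPLES
HIDDEN_DIMS = [512, 128]                        # assumed encoder widths
SPARSITY_TARGET, SPARSITY_WEIGHT = 0.05, 1e-3   # assumed KL-sparsity settings


def kl_sparsity(hidden, rho=SPARSITY_TARGET):
    """KL-divergence penalty pushing mean hidden activations toward rho."""
    rho_hat = hidden.mean(dim=0).clamp(1e-6, 1 - 1e-6)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()


class SparseAE(nn.Module):
    """One sparse autoencoder layer: sigmoid encoder, linear decoder."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.enc = nn.Linear(in_dim, hid_dim)
        self.dec = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        h = torch.sigmoid(self.enc(x))
        return self.dec(h), h


def pretrain_layer(ae, data, epochs=20, lr=1e-3):
    """Greedy layer-wise pretraining: reconstruction loss + sparsity penalty."""
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        recon, h = ae(data)
        loss = F.mse_loss(recon, data) + SPARSITY_WEIGHT * kl_sparsity(h)
        opt.zero_grad(); loss.backward(); opt.step()
    # Return the learned hidden representation as input for the next layer.
    return torch.sigmoid(ae.enc(data)).detach()


# Synthetic, minimally preprocessed trials stand in for real MI-EEG data here.
x = torch.randn(64, INPUT_DIM)
y = torch.randint(0, N_CLASSES, (64,))

encoders, feats, in_dim = [], x, INPUT_DIM
for hid in HIDDEN_DIMS:
    ae = SparseAE(in_dim, hid)
    feats = pretrain_layer(ae, feats)
    encoders.append(ae.enc)
    in_dim = hid

# Stack the pretrained encoders under a softmax head and fine-tune end-to-end.
classifier = nn.Sequential(
    *[nn.Sequential(enc, nn.Sigmoid()) for enc in encoders],
    nn.Linear(HIDDEN_DIMS[-1], N_CLASSES),
)
opt = torch.optim.Adam(classifier.parameters(), lr=1e-4)
for _ in range(30):
    loss = F.cross_entropy(classifier(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```

A deep belief network would follow the same two-stage pattern, replacing the layer-wise sparse autoencoders with stacked restricted Boltzmann machines before the supervised fine-tuning step.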