Herein, we investigated the effects of the choice of time segments, including visual presentation, motor imagery, and rest periods, used as training data for brain-computer interface (BCI) classification. Using the BCI Competition IV 2a and 2b datasets, many researchers have attempted to create more robust classifiers with higher classification accuracy. Some studies have also used the visual presentation and rest periods as training data. However, using training data outside the motor imagery period makes performance comparisons across models difficult and may lead to models that are overfitted to the experimental environment. In addition, brain activity other than motor imagery may be evoked during the visual presentation. Hence, to examine the effects of training data selection, we compared several classifiers, including linear discriminant analysis (LDA), support vector machines (SVM), and convolutional neural networks (CNN), trained on data that included the visual presentation and rest periods against the same classifiers trained only on motor imagery data. The results showed improved performance on BCI Competition IV 2a and 2b when the training data included visual presentation information. For the largest improvement among participants, training data with visual presentation improved accuracy by 13.44% and 10.14% on BCI Competition IV 2a (participant 9) for LDA and CNN, respectively, and by 8.38% and 16.68% on BCI Competition IV 2b (participant 3) for LDA and CNN, respectively. Although including visual presentation information in the training data improves measured performance, this gain likely stems from brain activity unrelated to motor imagery and hinders fair comparisons across models; we therefore recommend using only the motor imagery period to train models.

INDEX TERMS Brain-computer interfaces, convolutional neural networks, deep learning, electroencephalography, machine learning.
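
The following is a minimal sketch (not the authors' implementation) of the comparison described above: the same CSP + LDA pipeline is trained once on epochs cropped to an assumed motor-imagery-only window and once on epochs that also span the visual-cue period. The file name (A09T.gdf), event codes, and window boundaries are illustrative assumptions and would need to match the actual BCI Competition IV 2a recordings.

```python
# Minimal sketch, assuming MNE-Python and scikit-learn are available and a
# BCI Competition IV 2a GDF file is on disk. Event codes and time windows
# are illustrative assumptions, not the authors' exact settings.
import mne
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

raw = mne.io.read_raw_gdf("A09T.gdf", preload=True)   # assumed file path
raw.filter(8.0, 30.0)                                 # mu/beta band of interest
events, event_id = mne.events_from_annotations(raw)
# Keep only left-hand / right-hand cue annotations (codes assumed from the 2a spec)
event_id = {k: v for k, v in event_id.items() if k in ("769", "770")}
picks = mne.pick_types(raw.info, eeg=True, eog=False)

def score_window(tmin, tmax):
    """Cross-validated accuracy for one training window (seconds relative to cue onset)."""
    epochs = mne.Epochs(raw, events, event_id, tmin=tmin, tmax=tmax,
                        picks=picks, baseline=None, preload=True)
    X, y = epochs.get_data(), epochs.events[:, -1]
    clf = make_pipeline(CSP(n_components=4), LinearDiscriminantAnalysis())
    return cross_val_score(clf, X, y, cv=5).mean()

# Motor-imagery-only window vs. a window that also includes the visual cue period
acc_mi_only = score_window(tmin=1.25, tmax=4.0)   # assumed MI-only segment
acc_with_cue = score_window(tmin=0.0, tmax=4.0)   # also spans the cue presentation
print(f"MI only: {acc_mi_only:.3f} | with visual cue: {acc_with_cue:.3f}")
```

Under this setup, a higher score for the cue-inclusive window would mirror the effect reported above, while leaving open whether the gain reflects motor imagery or cue-evoked activity.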