In this study, we integrated virtual reality (VR) goggles and a motor imagery (MI) brain-computer interface (BCI) algorithm with a lower-limb rehabilitation exoskeleton robot (LLRER) system. The MI-BCI, combined with the VR goggles, classified the subject's movement intention, and the VR goggles enhanced the subjects' immersion during data collection. The VR-enhanced electroencephalography (EEG) classification model trained on seated subjects was applied directly to rehabilitation of the LLRER wearer. The experimental results showed that the VR goggles had a positive effect on MI-BCI classification accuracy: the best results were obtained with subjects in a seated position wearing VR. However, the seated-VR classification model could not be applied directly to triggering rehabilitation actions in the LLRER, because a number of confounding factors had to be overcome. This study therefore proposes a cumulative distribution function (CDF) auto-leveling method that transfers the seated-VR model to standing subjects wearing the exoskeleton. With this method, the seated-VR classification model achieved an accuracy of 75.35% in the open-loop test of the LLRER, and correctly triggered rehabilitation actions with an accuracy of 74% in closed-loop gait rehabilitation. These preliminary findings support the development of a closed-loop gait rehabilitation system activated by MI-BCI.
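
The general idea behind CDF-based auto-leveling can be sketched as empirical quantile matching: features extracted in the standing/exoskeleton condition are remapped so that their distribution matches the seated-VR training distribution before classification. The function below is a minimal illustrative sketch of this idea, assuming one-dimensional feature vectors; the paper's actual procedure, feature definitions, and parameterization may differ.

```python
import numpy as np

def cdf_auto_level(train_feats, test_feats):
    """Remap test features so their empirical distribution matches the
    training distribution (hypothetical sketch of CDF auto-leveling).

    train_feats: 1-D array of features from the training (seated-VR) condition.
    test_feats:  1-D array of features from the test (standing/LLRER) condition.
    Returns an array the same shape as test_feats, with each value replaced
    by the training-distribution value at the same empirical quantile.
    """
    # Sorted training values serve as the target quantile function.
    sorted_train = np.sort(train_feats)
    # Empirical CDF position (rank-based quantile) of each test value.
    ranks = np.argsort(np.argsort(test_feats))
    quantiles = (ranks + 0.5) / len(test_feats)
    # Look up the training value at each quantile.
    idx = np.clip((quantiles * len(sorted_train)).astype(int),
                  0, len(sorted_train) - 1)
    return sorted_train[idx]
```

Because the mapping is monotone in rank, it preserves the ordering of the test features while shifting and rescaling them onto the training distribution, which is one common way to compensate for condition-dependent shifts in EEG feature statistics.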