Brain-computer interfaces (BCIs) have been developed to allow users to communicate with the external world by translating brain activity into control signals. Motor imagery (MI) has been a popular paradigm for BCI control, in which the user imagines movements of, e.g., their left or right limbs, and classifiers are then trained to detect such intent directly from electroencephalography (EEG) signals. For some users, however, it is difficult to elicit patterns in the EEG signal that can be detected with existing features and classifiers. As such, new user control strategies and training paradigms have been highly sought after to help improve motor imagery performance. Virtual reality (VR) has emerged as one potential tool, where improvements in user engagement and level of immersion have been shown to improve BCI accuracy. Motor priming in VR, in turn, has been shown to further enhance BCI accuracy. In this pilot study, we take the first steps toward exploring whether multisensory VR motor priming, in which haptic and olfactory stimuli are present, can improve motor imagery detection efficacy in terms of both higher accuracy and faster detection. Experiments with 10 participants equipped with a biosensor-embedded VR headset, an off-the-shelf scent diffusion device, and a haptic glove with force feedback showed that significant improvements in motor imagery detection could be achieved. Increased activity in the six common spatial pattern filters used was also observed, and peak accuracy could be achieved with analysis windows that were 2 s shorter. Combined, these results suggest that multisensory motor priming prior to motor imagery could improve detection efficacy.