In virtual reality (VR), a higher level of presence positively influences the experience and engagement of a user. Several parameters are responsible for generating different levels of presence in VR, including, but not limited to, graphical fidelity, multi-sensory stimuli, and embodiment. However, standard methods of measuring presence, such as self-reported questionnaires, are prone to bias. This research focuses on developing a robust machine learning model to detect different levels of presence in VR from multimodal neurological and physiological signals, including electroencephalography (EEG) and electrodermal activity (EDA). An experiment was undertaken in which participants (N = 22) were each exposed to three different levels of presence (high, medium, and low) in VR, in a random order. Four parameters within each level, namely graphical fidelity, audio cues, latency, and embodiment with haptic feedback, were systematically manipulated to differentiate the levels. Several multi-class classifiers were evaluated on the resulting three-class classification problem using a One-vs-Rest approach: Support Vector Machine, k-Nearest Neighbour, Extreme Gradient Boosting, Random Forest, Logistic Regression, and Multilayer Perceptron. Results demonstrated that the Multilayer Perceptron model obtained the highest macro average accuracy of 93 ± 0.03%. Post-hoc analysis revealed that relative band power, expressed as the ratio of power in a specific frequency band to the total baseline power, in both the frontal and parietal regions, including the beta over theta and alpha ratio, and differential entropy were the most significant features for detecting different levels of presence.
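The evaluation setup described above can be sketched with scikit-learn. This is a minimal illustration on synthetic stand-in features, not the study's pipeline: the real EEG/EDA feature matrix, cross-validation scheme, and hyperparameters are not reproduced here, and Extreme Gradient Boosting is omitted because it requires the separate `xgboost` package.

```python
# Hypothetical sketch: comparing multi-class classifiers wrapped in a
# One-vs-Rest scheme on a three-class problem, mirroring the abstract's
# setup. X and y are synthetic placeholders for the EEG/EDA features
# and the low/medium/high presence labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the extracted features (three classes).
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

models = {
    "SVM": SVC(),
    "kNN": KNeighborsClassifier(),
    "RF": RandomForestClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
}

for name, clf in models.items():
    # Standardise features, then train one binary classifier per class.
    pipe = make_pipeline(StandardScaler(), OneVsRestClassifier(clf))
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

With macro-averaged metrics, as reported in the abstract, each of the three classes contributes equally to the final score regardless of class size.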
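The two feature types singled out by the post-hoc analysis can be sketched as follows. This is an illustrative implementation under stated assumptions, not the study's extraction code: relative band power is computed here as band power over total power via Welch's method, and differential entropy uses the closed form for a Gaussian-distributed signal; the sampling rate, frequency bands, and the 1–45 Hz total range are placeholder choices.

```python
# Hypothetical sketch of the features highlighted in the post-hoc
# analysis: relative band power and differential entropy.
import numpy as np
from scipy.signal import welch

def relative_band_power(signal, fs, band, total_band=(1.0, 45.0)):
    """Power in `band` as a fraction of power in `total_band` (assumed range)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    band_mask = (freqs >= band[0]) & (freqs < band[1])
    total_mask = (freqs >= total_band[0]) & (freqs < total_band[1])
    # Uniform frequency grid, so summing PSD bins is proportional to power
    # and the bin width cancels in the ratio.
    return psd[band_mask].sum() / psd[total_mask].sum()

def differential_entropy(signal):
    """Differential entropy assuming the samples are Gaussian:
    h = 0.5 * ln(2 * pi * e * variance)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(signal))

rng = np.random.default_rng(0)
fs = 256                               # assumed sampling rate (Hz)
eeg = rng.standard_normal(fs * 10)     # 10 s of synthetic "EEG"

theta = relative_band_power(eeg, fs, (4, 8))    # theta band, 4-8 Hz
beta = relative_band_power(eeg, fs, (13, 30))   # beta band, 13-30 Hz
print("beta/theta ratio:", beta / theta)
print("differential entropy:", differential_entropy(eeg))
```

In practice these features would be computed per channel over the frontal and parietal electrodes, with band power referenced to a baseline recording as the abstract describes.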