The aim of this study is to develop a real-time eyeblink detection algorithm for a virtual reality (VR) headset that detects eyeblinks during the closing phase and classifies the eye's current state (open or closed). The proposed method analyses a motion vector to detect eyelid closure and uses a Haar cascade classifier (HCC) to localise the eye in the captured frame. When a downward motion vector (DMV) is detected, a cross-correlation between the current region of interest (the eye in the current frame) and a template image of an open eye verifies eyelid closure. A finite state machine (FSM) makes the decision about eyeblink occurrence and tracks the eye state in a real-time video stream. The main contributions of this study are, first, the ability of the proposed algorithm to detect eyeblinks during the closing or pause phases, before the reopening phase occurs; and second, the realisation of the proposed approach as a valid real-time eyeblink detection sensor for a VR headset in a real-world scenario. The sensor is used in our ongoing study. The proposed method achieved 83.9% accuracy, 91.8% precision, and 90.4% recall, with a processing time of approximately 11 ms per frame. Additionally, we present a new dataset for a non-frontal eye-monitoring configuration for eyeblink tracking inside a VR headset. Data annotations are included, such that the dataset can be used for method validation and performance evaluation in future studies.
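The abstract's pipeline of DMV detection, template cross-correlation, and an FSM can be sketched as follows. This is a minimal illustration, not the paper's implementation: the state names, the normalised cross-correlation (NCC) verification, and the threshold value are all assumptions introduced here.

```python
import numpy as np

# Assumed FSM states; the paper does not publish its exact state set.
OPEN, CLOSING, CLOSED = "open", "closing", "closed"


def normalized_cross_correlation(roi, template):
    """Zero-mean NCC between two equal-size grayscale images, in [-1, 1]."""
    a = roi.astype(float) - roi.mean()
    b = template.astype(float) - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0


class BlinkFSM:
    """Minimal blink-detection FSM (illustrative sketch).

    A downward motion vector moves OPEN -> CLOSING; closure is then verified
    by a low correlation with the open-eye template, moving CLOSING -> CLOSED
    and reporting a blink before reopening. The 0.6 threshold is hypothetical.
    """

    def __init__(self, open_eye_template, ncc_threshold=0.6):
        self.template = open_eye_template
        self.ncc_threshold = ncc_threshold
        self.state = OPEN

    def update(self, roi, downward_motion):
        """Process one frame; return True when a blink is detected."""
        blink = False
        if self.state == OPEN and downward_motion:
            self.state = CLOSING
        elif self.state == CLOSING:
            if normalized_cross_correlation(roi, self.template) < self.ncc_threshold:
                self.state = CLOSED
                blink = True  # reported during the closing/pause phase
            elif not downward_motion:
                self.state = OPEN  # ROI still matches the open eye: false alarm
        elif self.state == CLOSED and not downward_motion:
            self.state = OPEN  # reopening observed
        return blink
```

The key property the sketch preserves is that the blink is reported as soon as closure is verified, i.e. before the reopening phase, which is the paper's first stated contribution.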
In this study, the relationship between a person's walking speed and the perception threshold for discrete implicit repositioning during eyeblinks in a virtual environment is investigated. The aim is to estimate the perception thresholds for forward and backward repositioning during forward translation following eyeblink occurrences. A psychophysical procedure, the transformed and weighted up/down staircase, is utilized to quantify these thresholds. The perception thresholds are estimated for three walking speeds: slow (0.58 m/s), moderate (0.86 m/s), and fast (1.1 m/s). The collected observations are then analyzed using regression analysis. The estimated perception thresholds for imperceptible forward repositioning were 0.374, 0.635, and 0.897 m for the three walking speeds, respectively; the corresponding thresholds for imperceptible backward repositioning were 0.287, 0.430, and 0.572 m. The findings reveal a proportional relationship between the perception threshold and the participant's walking speed: a participant can be imperceptibly repositioned over a greater distance when walking faster than when walking slower. In addition, the results show greater tolerance for forward discrete repositioning than for backward discrete repositioning during forward translation. These findings enable an extension of the manipulation types utilized by the redirected walking technique.
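A weighted up/down staircase of the kind named above can be sketched as follows. This is an illustrative implementation after Kaernbach's weighted up/down rule, not the study's exact procedure; the starting level, step sizes, target probability, and reversal-averaging rule are all assumptions.

```python
class WeightedUpDownStaircase:
    """Weighted up/down staircase (illustrative sketch, after Kaernbach, 1991).

    Step sizes satisfy step_up / step_down = p / (1 - p), so the procedure
    converges to the stimulus level that is detected with probability p
    (e.g. p = 0.75). Here the 'stimulus' is the repositioning distance in m.
    """

    def __init__(self, start, step_down, target_p=0.75, floor=0.0):
        self.level = start          # current repositioning distance (m)
        self.step_down = step_down  # decrement after a 'perceived' response
        self.step_up = step_down * target_p / (1.0 - target_p)
        self.floor = floor
        self.reversals = []         # levels at which the direction flipped
        self._last_dir = 0

    def respond(self, perceived):
        """Update the level after one trial; record direction reversals."""
        direction = -1 if perceived else +1
        if self._last_dir and direction != self._last_dir:
            self.reversals.append(self.level)
        self._last_dir = direction
        if perceived:
            self.level = max(self.floor, self.level - self.step_down)
        else:
            self.level += self.step_up

    def threshold(self, n_last=4):
        """Threshold estimate: mean of the last n_last reversal levels."""
        tail = self.reversals[-n_last:]
        return sum(tail) / len(tail)
```

With `target_p = 0.75`, a perceived manipulation lowers the distance by one small step while a missed manipulation raises it by three such steps, which drives the staircase toward the 75% detection point.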
More specifically, this allows for implementing a sophisticated composite redirected walking controller that utilizes continuous and discrete translation gains simultaneously; this not only reduces the cognitive load, but also reduces the amount of physical space required to support infinite free exploration of an immersive virtual environment.
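One way such a controller could exploit the reported speed-dependent thresholds is to look up, at each detected blink, the largest repositioning expected to go unnoticed at the current walking speed. The sketch below linearly interpolates between the three reported data points, consistent with the regression finding of a proportional relationship; the interpolation, clamping, and `safety` margin are assumptions introduced here, not part of the study.

```python
import numpy as np

# Walking speeds (m/s) and perception thresholds (m) reported in the study.
SPEEDS = np.array([0.58, 0.86, 1.10])
FORWARD_THRESHOLDS = np.array([0.374, 0.635, 0.897])
BACKWARD_THRESHOLDS = np.array([0.287, 0.430, 0.572])


def max_imperceptible_offset(speed, direction="forward", safety=0.9):
    """Largest blink-triggered repositioning (m) expected to go unnoticed.

    Linearly interpolates between the reported thresholds, clamping the
    speed to the measured range; `safety` scales the result below the
    threshold and is an illustrative parameter, not from the paper.
    """
    table = FORWARD_THRESHOLDS if direction == "forward" else BACKWARD_THRESHOLDS
    s = float(np.clip(speed, SPEEDS[0], SPEEDS[-1]))
    return safety * float(np.interp(s, SPEEDS, table))
```

Note that the forward table dominates the backward one at every speed, reflecting the finding that forward repositioning during forward translation tolerates larger offsets than backward repositioning.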