2022 Symposium on Eye Tracking Research and Applications (ETRA 2022)
DOI: 10.1145/3517031.3529642
Real-time head-based deep-learning model for gaze probability regions in collaborative VR

Abstract: Figure 1: a) The head rotational velocity direction (θ) and magnitude (ρ) are extracted during a VR session. b) Probability density functions are extracted from eye-gaze distributions that correspond to the head rotational velocity and are converted into a series of percentile-based contours (η). c) Our real-time model uses the three parameters (θ, ρ, η) to provide a novel representation of visual attention for VR collaboration or interaction.
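The first step of the pipeline described in the figure caption, extracting the direction (θ) and magnitude (ρ) of the head's rotational velocity from consecutive orientation samples, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, the yaw/pitch parameterization, and the finite-difference velocity estimate are all assumptions.

```python
import math

def head_velocity_polar(yaw_prev, pitch_prev, yaw_cur, pitch_cur, dt):
    """Return (theta, rho): direction and magnitude of the head's
    rotational velocity between two orientation samples.

    Angles are in radians, dt in seconds. Hypothetical helper; the
    paper's exact preprocessing may differ (e.g. quaternion-based).
    """
    dyaw = (yaw_cur - yaw_prev) / dt      # angular velocity, yaw axis
    dpitch = (pitch_cur - pitch_prev) / dt  # angular velocity, pitch axis
    theta = math.atan2(dpitch, dyaw)      # velocity direction (theta)
    rho = math.hypot(dyaw, dpitch)        # velocity magnitude (rho)
    return theta, rho

# A pure-yaw head turn of 0.1 rad over 0.1 s:
theta, rho = head_velocity_polar(0.0, 0.0, 0.1, 0.0, dt=0.1)
# rho = 1.0 rad/s, theta = 0.0 (motion along the yaw axis)
```

The (θ, ρ) pair, together with the percentile level η, then indexes into the learned gaze-probability contours at runtime.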

Cited by 6 publications (3 citation statements)
References 33 publications
“…The model employed is an MLP neural network tailored for gaze estimation. Compared to Bovo's study [2], we introduced the head's pitch parameter, referred to as "latitude" (φ). The latitude, as a factor in our predictive model, played a crucial role in improving the accuracy of gaze prediction.…”
Section: Fovea Prediction Models
confidence: 99%
“…It relies exclusively on head movements, making it suitable for real-time deployment on affordable HMDs. Our choice of the multilayer perceptron (MLP) neural network (NN) is informed by its successful results in Bovo's work [2]. Our MLP architecture is a fully connected layer designed to learn the central fixation points and the outlines of probable fixation areas.…”
Section: Introduction
confidence: 99%
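The citing work above describes a fully connected network that maps head-motion features to a central fixation point plus the outlines of probable fixation areas. A minimal forward-pass sketch of such an architecture is shown below; the layer sizes, the (θ, ρ, φ) input triple, the ReLU activation, and the output layout (2 center coordinates + 5 contour radii) are all assumptions, since neither paper's exact dimensions are given here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: input = (theta, rho, phi) head features,
# output = fixation center (x, y) + K contour radii. Weights are random
# stand-ins; a trained model would learn them from eye-gaze data.
IN, HIDDEN, K = 3, 32, 5
OUT = 2 + K

W1 = rng.normal(0.0, 0.1, (IN, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, OUT)); b2 = np.zeros(OUT)

def mlp_forward(x):
    """One fully connected hidden layer with ReLU, linear output head."""
    h = np.maximum(0.0, x @ W1 + b1)  # hidden activations
    return h @ W2 + b2                 # (x, y, r_1..r_K)

features = np.array([0.3, 1.2, -0.1])  # (theta, rho, phi)
pred = mlp_forward(features)           # 7-dimensional prediction vector
```

Because the network is a single hidden layer over three scalar inputs, inference is cheap enough for per-frame evaluation on standalone HMDs, which is the real-time property both papers emphasize.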
“…Head movements play a significant role in interaction as a means of expressing intent and eliciting emotions. Previous work has demonstrated the potential of machine learning models for detecting and classifying head gestures in a wide range of application scenarios [4,15,40,58]. For instance, Morency et al. leveraged visual features (e.g., head velocities or eye-gaze estimates) to propose an SVM model to classify head gestures as feedback nods and headshakes during interaction [35].…”
Section: Related Work 2.1 Classification of Head-Gaze Versus Head Gesture
confidence: 99%