2021 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)
DOI: 10.1109/aivr52153.2021.00013
Predicting Future Position From Natural Walking and Eye Movements with Machine Learning

Abstract: The prediction of human locomotion behavior is a complex task based on data from the given environment and the user. In this study, we trained multiple machine learning models to investigate whether data from contemporary virtual reality hardware enables long- and short-term locomotion predictions. To create our data set, 18 participants walked through a virtual environment with different tasks. The recorded position, orientation, and eye-tracking data were used to train an LSTM model predicting the future walking t…
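The abstract describes training an LSTM on recorded position, orientation, and eye-tracking data to predict the future walking position. A minimal sketch of how such a supervised data set could be assembled from tracked frames is shown below; the window length, prediction horizon, and field names are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch: turning a tracked VR recording into supervised
# (past window, future position) pairs for a sequence model such as an LSTM.
# Frame fields and the window/horizon values are assumptions for illustration.

def make_samples(frames, window=5, horizon=2):
    """Slice a recording into (input window, future position) pairs.

    frames:  list of dicts with 'pos', 'ori', 'gaze' lists per tracked frame.
    window:  number of past frames fed to the predictor.
    horizon: how many frames ahead the walking position is predicted.
    """
    samples = []
    for t in range(window, len(frames) - horizon):
        # Concatenate position, orientation, and gaze features per past frame.
        features = [
            frames[k]["pos"] + frames[k]["ori"] + frames[k]["gaze"]
            for k in range(t - window, t)
        ]
        label = frames[t + horizon]["pos"]  # future walking position
        samples.append((features, label))
    return samples
```

Varying `horizon` is one way the short- versus long-term distinction in the abstract could be expressed: the same windowed features are paired with labels further in the future.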

Cited by 16 publications (36 citation statements)
References 45 publications
“…Usually eye movements to a behavioral target precede any other motor action [18,32]. Therefore, they present a rich signal for the estimation of action intention [3,6,16,56,57]. However, during walking this does not mean that people lock their gaze on a future target, for example a door, at all times.…”
Section: Gaze Behavior During Walking
confidence: 99%
“…Since our prediction model should be based on user behavior without information about the environment, it needed to use a coordinate system in a reference frame attached to the user rather than to the environment. For the present study, we used a head-fixed coordinate system, which has previously shown the best results (details in [6]).…”
Section: Prediction Model, 4.6.1 Features and Labels
confidence: 99%
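The quoted statement above explains that future positions are expressed in a head-fixed coordinate system so the model never sees the environment's frame. A minimal sketch of such a transform in 2D is given below, assuming yaw-only head rotation with forward along +y; the function and convention are ours, not from the cited work.

```python
import math

# Hypothetical sketch of a head-fixed reference frame: a future position in
# world coordinates is re-expressed relative to the user's current head
# position and yaw. Names and the yaw convention are illustrative assumptions.

def world_to_head(point, head_pos, head_yaw):
    """Express a world-space 2D point in a frame centered on the head,
    with the head's forward direction mapped to +y.

    head_yaw: counterclockwise head rotation in radians (0 = facing +y).
    """
    # Translate so the head is at the origin.
    dx = point[0] - head_pos[0]
    dy = point[1] - head_pos[1]
    # Rotate the displacement by the inverse head rotation.
    c, s = math.cos(-head_yaw), math.sin(-head_yaw)
    return (c * dx - s * dy, s * dx + c * dy)
```

For example, a point directly in front of the head always maps to the +y axis regardless of where the user stands or faces, which is exactly the environment-independence the quoted passage describes.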
“…In particular, it will combine sensorimotor inputs recorded from PPC with data regarding eye movements to provide end-users with the control of robotic devices. Furthermore, such human-centered AI will interact and maintain continuous bidirectional exchanges, using gaze and other appropriate behavioural parameters for action selection in the AI system, natural communication between a user and the AI system, and mutual learning [ 31 ]. These features ensure that the AI naturally and efficiently controls the neuroprosthesis or the assistive device motor output according to the end user’s intention and real needs.…”
Section: Introduction
confidence: 99%