2021 22nd IEEE International Conference on Industrial Technology (ICIT)
DOI: 10.1109/icit46573.2021.9453581
Human Movement Direction Prediction using Virtual Reality and Eye Tracking

Abstract: One way of potentially improving the use of robots in a collaborative environment is through prediction of human intention, which would give the robots insight into how the operators are about to behave. An important part of human behaviour is arm movement, and this paper presents a method to predict arm movement based on the operator's eye gaze. A test scenario has been designed in order to gather coordinate-based hand movement data in a virtual reality environment. The results show that the eye gaze data can s…
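The abstract describes predicting arm-movement direction from the operator's eye gaze. As a minimal sketch of the general idea, assuming hypothetical 3-D gaze samples and a simple nearest-target heuristic (not the paper's actual model):

import numpy as np

def predict_direction(gaze_points, targets):
    """Predict which target the hand will move toward, using the
    mean of recent gaze samples (a hypothetical heuristic, not the
    paper's method)."""
    gaze = np.asarray(gaze_points)        # shape (N, 3): x, y, z samples
    centroid = gaze.mean(axis=0)          # average recent gaze position
    targets = np.asarray(targets)         # shape (K, 3): candidate goals
    dists = np.linalg.norm(targets - centroid, axis=1)
    return int(np.argmin(dists))          # index of the nearest target

# Usage: three candidate targets, gaze hovering near the second one
targets = [(0.0, 1.0, 0.5), (0.4, 1.2, 0.5), (-0.4, 0.8, 0.5)]
gaze = [(0.38, 1.19, 0.52), (0.41, 1.21, 0.49), (0.39, 1.18, 0.51)]
print(predict_direction(gaze, targets))   # -> 1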

Cited by 5 publications (7 citation statements)
References 24 publications
“…The VRE designed to collect the data consists of four stages: language selection, where the test participant selects whether the written instructions in the VRE should be given in Swedish or English; ET calibration; an information form, where the participant enters age, gender, and whether they are right-handed or not; and, as the last stage, the test itself. The test stage, Figure 1, is an alteration of the test in Pettersson and Falkman (2021), see below.…”
Section: Methods
confidence: 99%
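The statement above outlines a fixed four-stage session flow. A minimal sketch of that flow, with hypothetical stage names and handler callables (the actual VRE implementation is not given here):

from enum import Enum, auto

class Stage(Enum):
    # Hypothetical names for the four stages described above
    LANGUAGE_SELECTION = auto()  # Swedish or English instructions
    ET_CALIBRATION = auto()      # eye-tracker calibration
    INFO_FORM = auto()           # age, gender, handedness
    TEST = auto()                # the movement test itself

# Fixed stage order, as described in the cited statement
SEQUENCE = [Stage.LANGUAGE_SELECTION, Stage.ET_CALIBRATION,
            Stage.INFO_FORM, Stage.TEST]

def run_session(handlers):
    """Run each stage in order; `handlers` maps Stage -> callable."""
    for stage in SEQUENCE:
        handlers[stage]()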
“…The test sequence was randomized as suggested in future improvements by Pettersson and Falkman (2021).…”
Section: Methods
confidence: 99%
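Randomizing a test sequence is straightforward; a minimal sketch with a seeded shuffle (the trial labels are hypothetical, since the actual test items are not listed here):

import random

# Hypothetical trial labels; the actual test items are not given here
trials = [f"target_{i}" for i in range(9)]

rng = random.Random(42)   # fixed seed so a session order can be reproduced
order = trials[:]         # copy so the master list stays intact
rng.shuffle(order)        # randomized per-participant test sequence
print(order)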
“…These systems are wearable, cumbersome, and user-unfriendly, and the electrodes are biased at some positions. The second group is multi-modal sensing, which combines two or more sensors to capture additional input features that can assist in detecting or recognizing events in the gesture, as in [13], [14], [15], [16]; these generally require calibration of the sensors first, which makes the systems complex and unfriendly. The third group is video oculography (VOG), which is the most widely adopted nowadays because it can capture images of the subject's eyes and estimate eye positions and the point of gaze (POG), i.e., where the user is looking [17], [18].…”
Section: Related Work
confidence: 99%
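For the VOG approach mentioned above, one common generic technique maps measured pupil-center coordinates to a point of gaze via a second-order polynomial fitted from calibration fixations. A sketch under that assumption, not the cited systems' exact method; all function names are illustrative:

import numpy as np

def poly_features(p):
    """Second-order polynomial features of pupil coordinates (px, py)."""
    px, py = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])

def fit_gaze_mapping(pupil_pts, screen_pts):
    """Least-squares pupil -> screen mapping; needs >= 6 calibration points."""
    X = poly_features(np.asarray(pupil_pts, dtype=float))
    Y = np.asarray(screen_pts, dtype=float)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # (6, 2) coefficient matrix
    return W

def estimate_pog(pupil_pt, W):
    """Map one pupil-center measurement to an estimated point of gaze."""
    return (poly_features(np.asarray([pupil_pt], dtype=float)) @ W)[0]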
“…[Figure: Sequential illustration of the shooting scenario for the data collection. Source: [33].] …also been used to automatically extract feature maps from joints connected spatially to each other, as well as temporally through time [15].…”
Section: Action Classification
confidence: 99%
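The fragment above alludes to extracting feature maps from skeleton joints connected spatially and temporally, in the spirit of spatio-temporal graph convolutions. A minimal numpy sketch under that assumption; shapes, the placeholder adjacency, and all names are illustrative, not the cited paper's architecture:

import numpy as np

def st_graph_layer(X, A, W, temporal_k=3):
    """One spatial-temporal layer over skeleton joints (a sketch).

    X: (T, J, C) joint features over T frames, J joints, C channels
    A: (J, J) normalized adjacency linking spatially connected joints
    W: (C, C_out) learnable channel projection
    """
    spatial = np.einsum('ij,tjc->tic', A, X) @ W   # mix neighboring joints
    # Temporal mixing: average each joint channel over a sliding window
    kernel = np.ones(temporal_k) / temporal_k
    out = np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode='same'), 0, spatial)
    return np.maximum(out, 0.0)                    # ReLU nonlinearity

# Usage with random data: 30 frames, 15 joints, 3 coords -> 8 channels
T, J, C, C_out = 30, 15, 3, 8
X = np.random.randn(T, J, C)
A = np.eye(J)                       # identity adjacency as a placeholder
W = np.random.randn(C, C_out) * 0.1
print(st_graph_layer(X, A, W).shape)   # (30, 15, 8)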