2023
DOI: 10.1109/tvcg.2023.3247058

Leveling the Playing Field: A Comparative Reevaluation of Unmodified Eye Tracking as an Input and Interaction Modality for VR

Fig. 1 (abstract figure; panels: Head, Controller, Eyes): Head input modality (left) with a red cursor on the target, using ISO 9241-9 style double-ring target positioning; faint placeholder dots show where targets can appear. Controller input modality (center) uses the same red cursor with a 0.5 m thin white rod and Random Web target positioning. Unmodified Eye input modality (right) shows no cursor feedback.
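
For readers unfamiliar with the ISO 9241-9 layout named in the caption, the sketch below shows the usual construction: targets placed evenly on a ring and visited in an alternating, across-the-circle order. This is a minimal Python illustration of the standard layout only, not code from the paper; the function names are hypothetical.

```python
import math

def iso9241_ring_targets(n_targets: int, radius: float, center=(0.0, 0.0)):
    """Place n_targets evenly on a circle (ISO 9241-9 multidirectional layout)."""
    return [
        (center[0] + radius * math.cos(2 * math.pi * i / n_targets),
         center[1] + radius * math.sin(2 * math.pi * i / n_targets))
        for i in range(n_targets)
    ]

def iso9241_selection_order(n_targets: int):
    """Alternating, across-the-circle visiting order; assumes an odd target
    count (typical for this task) so every target is visited exactly once."""
    assert n_targets % 2 == 1, "use an odd target count"
    step = (n_targets + 1) // 2  # jump roughly halfway around the ring
    return [(i * step) % n_targets for i in range(n_targets)]

# Example: 9 targets on a 0.3 m ring, visited in the order 0, 5, 1, 6, 2, 7, 3, 8, 4.
positions = iso9241_ring_targets(9, 0.3)
order = iso9241_selection_order(9)
```

The double-ring variant in the figure presumably nests two such rings at different radii; the Random Web positioning is a separate scheme not reconstructed here.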

Cited by 25 publications (9 citation statements). References 61 publications.
“…In the context of our low vision project, we were particularly interested in the ability of a subject to select a target with different kinds of techniques. Although the ability to select targets with different kinds of controllers has been heavily investigated in the VR literature with normally sighted persons (Fernandes, Murdison, & Proulx, 2023; Yu et al., 2018), it seems that this topic has been overlooked for persons with low vision.…”
Section: Results
confidence: 99%
“…Hence, it is a two-step procedure where the user’s gaze acts as a cursor and the selection is triggered by an additional modality. Various modalities have been investigated, including eye blinks [71], speech [63, 96, 122], head movements [115, 121], handheld devices [29, 64], tongue-based interfaces [34], body movements [57], and hand gestures such as the pinch gesture [64, 82, 99]. Additional gestures, such as gaze-hand alignment [74], confirm target selection either by aligning a ray that emanates from the user’s hand with their gaze (Gaze&Handray) or by aligning the user’s finger with their gaze (Gaze&Finger) [117].…”
Section: Related Work
confidence: 99%
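
As a concrete illustration of the two-step procedure this excerpt describes (gaze acts as the cursor; a separate modality such as a pinch, blink, or button press confirms), here is a minimal, hypothetical Python sketch. The names and the 2D hit test are assumptions for illustration only, not code from any of the cited papers.

```python
from dataclasses import dataclass
from typing import Optional, Sequence, Tuple

@dataclass
class Target:
    x: float
    y: float
    radius: float

def gaze_trigger_select(
    gaze: Tuple[float, float],
    trigger_pressed: bool,
    targets: Sequence[Target],
) -> Optional[Target]:
    """Two-step gaze selection: the gaze point hovers like a cursor, and an
    external trigger (button, pinch, blink, ...) confirms the selection, so
    gaze alone never activates anything (avoiding the Midas touch)."""
    if not trigger_pressed:
        return None  # step 1 only: gaze hovers without selecting
    gx, gy = gaze
    for t in targets:
        if (gx - t.x) ** 2 + (gy - t.y) ** 2 <= t.radius ** 2:
            return t  # step 2: trigger confirms the target under gaze
    return None
```

A dwell-based variant would replace trigger_pressed with a timer on how long the gaze has remained inside the same target, trading the manual trigger for a time threshold.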
“…To mitigate the Midas touch, HCI researchers have proposed the use of an external trigger as a form of manual input [128]. This solution has been implemented in a variety of ways, from voluntary blinks [71] to simply pressing a button on a physical controller [29] (see Section 2.1). These techniques improve selection accuracy; however, they require physical exertion, potentially introducing fatigue over extended use [45].…”
Section: Introduction
confidence: 99%
“…Furthermore, the Meta Quest 2, a leading VR device, has garnered attention for further research and development. Efforts have been made to measure controller precision [26] and hand movement accuracy [27], to evaluate eye tracking [28], to integrate haptic features [29], [30], and to create 3D avatar applications [31].…”
Section: Introduction
confidence: 99%