Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology 2022
DOI: 10.1145/3526113.3545628
Detecting Input Recognition Errors and User Errors using Gaze Dynamics in Virtual Reality

Cited by 12 publications (1 citation statement)
References 52 publications
“…Another approach for addressing the Midas Touch problem is to expand the input dimension such as leveraging multi-modal inputs, which combines gaze with another input modality. The typical multi-modal inputs combine the gaze direction and controller inputs [23,33]. Other works combine gaze and head information.…”
Section: Gaze-based VR Interaction (citation type: mentioning)
Confidence: 99%
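The quoted statement describes gaze-plus-controller multi-modal input as one way around the Midas Touch problem (unintended activation from gaze alone). The sketch below is not taken from the cited paper; all names, thresholds, and geometry are illustrative. It shows the basic pattern the statement refers to: the gaze ray only highlights a candidate target, and an explicit controller trigger press commits the selection.

```python
# Minimal sketch (illustrative only): gaze picks the candidate target,
# a controller button press confirms it, so gaze alone never triggers
# an action and the Midas Touch problem is avoided.
from dataclasses import dataclass
import math

@dataclass
class Target:
    name: str
    direction: tuple  # unit vector from the eye toward the target's center

def angle_between(a, b):
    """Angle in degrees between two (approximately unit) vectors."""
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.degrees(math.acos(dot))

def gaze_candidate(gaze_dir, targets, max_angle_deg=3.0):
    """Return the target closest to the gaze ray, if it lies within the cone."""
    best = min(targets, key=lambda t: angle_between(gaze_dir, t.direction))
    return best if angle_between(gaze_dir, best.direction) <= max_angle_deg else None

def update(gaze_dir, trigger_pressed, targets):
    """One frame of the multi-modal loop: gaze highlights, the trigger selects."""
    candidate = gaze_candidate(gaze_dir, targets)
    if candidate and trigger_pressed:
        return ("select", candidate.name)     # explicit confirmation commits the action
    if candidate:
        return ("highlight", candidate.name)  # visual feedback only, no action
    return ("idle", None)

# Example frame: the user looks near "OK" and squeezes the trigger.
targets = [Target("OK", (0.0, 0.0, 1.0)), Target("Cancel", (0.1, 0.0, 0.995))]
print(update((0.01, 0.0, 0.9999), trigger_pressed=True, targets=targets))  # ('select', 'OK')
```

The design choice illustrated here is the one the statement highlights: each modality does what it is good at (gaze for fast pointing, the controller for deliberate confirmation), so dwell time alone never triggers a selection.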