ACM Symposium on Eye Tracking Research and Applications 2020
DOI: 10.1145/3379156.3391829

Eye vs. Head: Comparing Gaze Methods for Interaction in Augmented Reality

Cited by 14 publications (10 citation statements)
References 6 publications
“…From our observation, the issue was likely due to the nature of the matching task, which required the user to view the cube in parallel with the manipulation. This could also explain the difference between our findings and those of Pathmanathan et al. [13], who collected feedback after a free manipulation task without a defined target object pose. One way to tackle this issue would be to improve the interface design so that important information is concentrated in the central and paracentral vision, for example by showing a small copy of the rotated object at the eye-gaze position.…”
Section: Design Implications and Future Work (contrasting)
confidence: 91%
“…She also defined a free-rotation mode that gradually aligns a selected point with the forward-facing direction. Pathmanathan et al. [13] compared head-gaze and eye-gaze for manipulating object transformations using these manipulation techniques. They found that users preferred head-gaze, while there was no difference in preference between the constrained and free-rotation methods.…”
Section: Object Orientation (mentioning)
confidence: 99%
“…Some research has already pioneered gaze-assisted UI for accessing more contextual information [Ajanki et al. 2011; Lu et al. 2020; Pathmanathan et al. 2020; Sasikumar et al. 2019]. It has been shown that an optical see-through (OST) HMD that harnesses human gaze, via eye tracking, as the interaction metaphor can yield efficient results [Looser et al. 2007].…”
Section: AR With Gaze Assistance (mentioning)
confidence: 99%
“…Numerous gaze-based interaction techniques have also been proposed to improve interaction by increasing target or cursor sizes via zooming [12,39], using area cursors [12], nudging cursors via gaze-based buttons [56], or incrementally disambiguating possible targets [42]. However, these strategies do not guarantee the removal of tracker errors, as demonstrated by prior work in which participant data was discarded due to calibration and tracking issues [1,4,15,19,45,48,50,53,59]. Thus, this research proposes the use of fallback modalities to make gaze-based systems more robust and accessible to users.…”
Section: Handling Eye Tracking Error (mentioning)
confidence: 99%