2019
DOI: 10.1007/s12193-019-00305-y

GG Interaction: a gaze–grasp pose interaction for 3D virtual object selection

Abstract: During the last two decades, the development of 3D object selection techniques has been widely studied because it is critical for providing an interactive virtual environment to users. Previous techniques encounter difficulties with selecting small or distant objects, as well as with naturalness and physical fatigue. Although eye-hand based interaction techniques have been promoted as the ideal solution to these problems, research on eye-hand based spatial interaction techniques in 3D virtual spaces has progressed very…

Cited by 23 publications (13 citation statements)
References 32 publications
“…They explored this technique for object selection, manipulation, scene navigation, menu interaction, and image zooming. Similarly, Ryu et al. [276] introduced a combined grasp eye-pointing technique for 3D object selection. Kytö et al. [190] combined head and eye gaze for improving target selection in AR.…”
Section: Multimodal Interaction (mentioning)
confidence: 99%
“…Both rigid shapes (e.g., rectangular [107] or circular [108]) and flexible shapes (e.g., [109]) have been used, as well as various display media (e.g., projection on cardboard [17,107]), transparent props [12,98], handheld touchscreens [40,59], or virtual lenses [64,82]. In addition, the combination of eye-gaze with other modalities such as touch [85,86], mid-air gestures [87,97,101] and head-movements [56,103,104] has been recently investigated for interaction in spatial user interfaces. For a recent survey on gaze-based interaction in AR and VR, see Hirzle et al [47].…”
Section: Spatial Interaction (mentioning)
confidence: 99%
“…They considered the selection of small objects, with the exception of overlapping ones [34]. Ryu et al. introduced a method that used gaze tracking to approximately determine the candidate objects for selection [26]. Then, the thickness of the candidate object and the pose of the user's hand were matched to select the target.…”
Section: Gaze-supported Object Selection Techniques (mentioning)
confidence: 99%
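
The statement above summarizes the two-stage pattern behind the technique: eye gaze coarsely narrows the set of candidate objects, and the hand's grasp pose (here reduced to a thumb-index aperture) disambiguates among them by matching object thickness. The following is a minimal sketch of that idea, assuming a simple gaze-cone test; the `SceneObject` and `select_object` names, the 5-degree cone threshold, and the aperture measurement are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' implementation) of gaze-then-grasp selection:
# gaze narrows down candidate objects inside a viewing cone, and the grasp
# aperture picks the candidate whose thickness best matches it.
from dataclasses import dataclass
import math

@dataclass
class SceneObject:
    name: str
    center: tuple       # (x, y, z) position in world space, metres
    thickness: float    # extent along the graspable axis, metres

def angle_between(gaze_dir, to_obj):
    """Angle (radians) between the gaze direction and the eye-to-object vector."""
    dot = sum(a * b for a, b in zip(gaze_dir, to_obj))
    norm = math.sqrt(sum(a * a for a in gaze_dir)) * math.sqrt(sum(b * b for b in to_obj))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def select_object(eye_pos, gaze_dir, hand_aperture, objects, gaze_cone_deg=5.0):
    """Return the object inside the gaze cone whose thickness best matches
    the current thumb-index aperture, or None if no candidate exists."""
    cone = math.radians(gaze_cone_deg)
    candidates = []
    for obj in objects:
        to_obj = tuple(c - e for c, e in zip(obj.center, eye_pos))
        if angle_between(gaze_dir, to_obj) <= cone:
            candidates.append(obj)
    if not candidates:
        return None
    # Disambiguate overlapping candidates by aperture/thickness match.
    return min(candidates, key=lambda o: abs(o.thickness - hand_aperture))

# Usage example: both objects fall inside the gaze cone; a 2 cm aperture
# selects the thinner one.
scene = [
    SceneObject("mug",    (0.0, 0.0, 1.0), thickness=0.09),
    SceneObject("pencil", (0.02, 0.0, 1.0), thickness=0.01),
]
picked = select_object(eye_pos=(0, 0, 0), gaze_dir=(0, 0, 1),
                       hand_aperture=0.02, objects=scene)
print(picked.name)  # -> "pencil"
```

Nearest-thickness matching is only one possible disambiguation rule; the cited work matches the full hand pose against the candidate object rather than a single aperture value.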