2022
DOI: 10.1109/tvcg.2022.3203110

Gaze-Vergence-Controlled See-Through Vision in Augmented Reality

Cited by 15 publications (6 citation statements)
References 61 publications
“…Previous works have shown the feasibility of visual depth estimation in VR headsets using binocular disparity and stereoscopic vergence [10,40]. The concept of visual depth as a new interaction input has been explored in previous works, either by defining a semi-transparent window at a different focal depth [26,41,42], tracking voluntary vergence movements [3,13], or matching vergence changes with the depth changes of a moving object [34] in VR. However, all of these methods lack an end-to-end UI design to guide users in how to actively manipulate their visual depth.…”
Section: Gaze-based VR Interaction (mentioning)
confidence: 99%
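Several of the statements here cite vergence tracking as a depth cue, so a brief illustration of the underlying geometry may help. The following is a minimal sketch, not code from the paper or any citing work: it assumes an eye tracker reporting horizontal gaze angles for each eye and a known interpupillary distance (IPD), and all names and default values are hypothetical.

```python
import math

def vergence_depth(yaw_left_deg: float, yaw_right_deg: float,
                   ipd_m: float = 0.063) -> float:
    """Estimate fixation depth from binocular vergence (illustrative sketch).

    Assumes both eyes fixate a point straight ahead on the mid-sagittal
    plane; yaw_* are horizontal gaze angles in degrees, positive inward.
    """
    # Total vergence angle: how far the two visual axes rotate toward each other.
    vergence_rad = math.radians(yaw_left_deg + yaw_right_deg)
    if vergence_rad <= 0.0:
        return float("inf")  # parallel or diverging gaze: no finite fixation depth
    # For a symmetric fixation point: depth = (IPD / 2) / tan(vergence / 2).
    return (ipd_m / 2.0) / math.tan(vergence_rad / 2.0)

# Roughly 3.6 degrees of total vergence places the fixation point near 1 m.
print(round(vergence_depth(1.8, 1.8), 2))
```

The same relation explains why vergence works best as a near-range cue: beyond a few meters the vergence angle changes very little with depth, so estimates become noisy.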
“…More recent works on gaze-based VR/AR interaction demonstrate the great potential of visual depth as an interaction input to solve the Midas touch problem. These methods either guide the user to look at physical or virtual objects at different depths [3,26,34,41,42] or rely on voluntary eye convergence and divergence [13,15], asking users to focus on their nose or to imagine fixating on a point behind the display plane. However, these works lack an intuitive and systematic User Interface (UI) design to guide users in manipulating their visual depth, leading to limited application scenarios and potential user frustration.…”
Section: Introduction (mentioning)
confidence: 99%
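The "voluntary convergence and divergence" trigger mentioned in this statement is essentially a gesture detector on the vergence signal. As a purely hypothetical sketch of one way such a trigger could work (this is not the design of [13] or [15]), assuming a per-frame fixation-depth estimate like the one sketched above:

```python
from collections import deque

class VergenceGestureDetector:
    """Hypothetical detector for a voluntary convergence 'click'.

    Fires when the estimated fixation depth drops sharply and stays
    near for a dwell period, which incidental depth changes rarely do.
    """

    def __init__(self, depth_drop_m: float = 0.5, dwell_frames: int = 30):
        self.depth_drop_m = depth_drop_m   # required sudden decrease in depth
        self.dwell_frames = dwell_frames   # frames the near depth must persist
        self.history = deque(maxlen=dwell_frames + 1)

    def update(self, depth_m: float) -> bool:
        """Feed one depth sample per frame; returns True when the gesture fires."""
        self.history.append(depth_m)
        if len(self.history) <= self.dwell_frames:
            return False
        baseline, recent = self.history[0], list(self.history)[1:]
        # Fire only if every recent sample sits well below the baseline depth.
        return all(baseline - d >= self.depth_drop_m for d in recent)
```

Requiring both a large depth drop and a dwell is one plausible answer to the Midas touch problem the statement refers to: natural gaze rarely converges that far and holds there.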
“…To ensure that the eyes in $x_{fro}$ indeed look at $(0, 0)$ with the head orientation unchanged, we introduce a reference face image $x_{ref}$, which has eyeballs looking at $(0, 0)$ and the same head orientation and identity as $x$, to constrain the generation of $x_{fro}$. We maximize the Multi-Scale Structural Similarity (MS-SSIM) (Wang, Simoncelli, and Bovik 2003) between $x_{ref}$ and $x_{fro}$:…”
Section: Gaze (mentioning)
confidence: 99%
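The formula that followed the colon is truncated in this excerpt. For reference, the standard MS-SSIM index from the cited Wang, Simoncelli, and Bovik (2003) paper, written here for $x_{ref}$ and $x_{fro}$, is (the citing paper's exact scale weights may differ):

```latex
% MS-SSIM over M scales (Wang, Simoncelli, and Bovik 2003):
% l = luminance, c = contrast, s = structure comparisons at scale j.
\mathrm{MS\mbox{-}SSIM}(x_{ref}, x_{fro}) =
  \bigl[ l_M(x_{ref}, x_{fro}) \bigr]^{\alpha_M}
  \prod_{j=1}^{M} \bigl[ c_j(x_{ref}, x_{fro}) \bigr]^{\beta_j}
                  \bigl[ s_j(x_{ref}, x_{fro}) \bigr]^{\gamma_j}
```

Maximizing this index pushes $x_{fro}$ toward the luminance, contrast, and structure of the reference face at multiple scales, which is what constrains the generated eyes to look at $(0, 0)$.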
“…Gaze information is important for real-world applications. It indicates the direction or position at which a person is looking and is widely used in many scenarios, such as augmented reality (Wang, Zhao, and Lu 2022) and autonomous driving (Mole et al. 2021). To obtain this information, a number of gaze estimation methods have been proposed.…”
Section: Introduction (mentioning)
confidence: 99%
“…The human eye gaze reveals what a person’s interests are, and it can be used as a medium for non-verbal communication. As a means of communication, gaze can be used in many areas, such as human–computer interaction [1], human–robot interaction [2], virtual reality [3], augmented reality [4], and autonomous driving [5]. The goal of the gaze estimation task in computer vision is to estimate the gaze information of a subject from an input image that includes the subject’s face or eye(s).…”
Section: Introduction (mentioning)
confidence: 99%
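The task defined in this last statement, regressing gaze from a face or eye image, is commonly implemented as a small CNN. A minimal PyTorch-style sketch follows (illustrative only; not the architecture of any cited method, and all layer sizes are arbitrary):

```python
import torch
import torch.nn as nn

class GazeRegressor(nn.Module):
    """Minimal appearance-based gaze estimator: eye crop -> (pitch, yaw)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(          # tiny CNN backbone
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(              # regress two angles in radians
            nn.Flatten(),
            nn.Linear(32 * 16 * 24, 128), nn.ReLU(),
            nn.Linear(128, 2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# A 64x96 grayscale eye crop yields one (pitch, yaw) prediction.
model = GazeRegressor()
print(model(torch.randn(1, 1, 64, 96)).shape)  # torch.Size([1, 2])
```

Such a regressor is typically trained with an L2 or angular loss against ground-truth gaze directions collected with an eye tracker.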