Proceedings of the 18th ACM International Conference on Multimodal Interaction 2016
DOI: 10.1145/2993148.2993156
Visuotactile integration for depth perception in augmented reality

Abstract: Augmented reality applications using stereo head-mounted displays are not capable of perfectly blending real and virtual objects. For example, depth in the real world is perceived through cues such as accommodation and vergence. However, in stereo head-mounted displays these cues are disconnected since the virtual is generally projected at a static distance, while vergence changes with depth. This conflict can result in biased depth estimation of virtual objects in a real environment. In this research, we exam…

Cited by 5 publications
(4 citation statements)
References 17 publications
“…Using the same experimental setup, McCandless et al [42] additionally studied motion parallax and latency in monocular viewing; they found reduced accuracy with increasing distance and latency. Singh et al [43] found that an occluding surface has complex accuracy effects, and Rosa et al [44] found increased accuracy with redundant tactile feedback.…”
Section: Related Work in Augmented Reality
confidence: 99%
“…In the well-studied audio–visual integration space, visual information is modulated by altering the frequency or localization of seen and heard stimuli ( Rohe and Noppeney, 2018 ), often by employing the established McGurk paradigm ( Gentilucci and Cattaneo, 2005 ). In another example, for visuo–haptic integration studies, visual information is modulated in size estimation or identification tasks through the manipulation of an object’s physical shape ( Yalachkov et al, 2015 ) or the alteration of digital images through augmented ( Rosa et al, 2016 ) and virtual reality (VR) headsets ( Noccaro et al, 2020 ).…”
Section: Expanding Multisensory Integration: Current Tools/Methods and Emerging Technology
confidence: 99%
“…These displays provide a benefit over traditional 2D screens since they are not limited by two-dimensional content. PBDs show 3D images in mid-air, thus allowing depth perception, which could be integrated into traditional experimental paradigms, such as depth discrimination tasks ( Deneve and Pouget, 2004 ; Rosa et al, 2016 ). Furthermore, PBDs also offer a benefit over VR headsets as these novel displays do not require wearing of a head-mounted display (HMD).…”
Section: Expanding Multisensory Integration: Current Tools/Methods and Emerging Technology
confidence: 99%
“…McCandless et al [42] studied motion parallax and latency in monocular viewing and found reduced accuracy with increasing distance and latency. Singh et al [44] found that an occluding surface has complex accuracy effects, and Rosa et al [39] found that accuracy increased when using redundant tactile feedback.…”
Section: Perceptual Issues in OST HMDs
confidence: 99%