Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3290605.3300431
Mixed Reality Remote Collaboration Combining 360 Video and 3D Reconstruction

Cited by 147 publications (53 citation statements)
References 27 publications
“…Nonverbal cues are often missing in remote collaboration solutions in XR. They are important since they can affect user performance even more than a lack of verbal communication (Teo et al. 2019). Jo et al. (2017) found that presenting avatars against a real/AR background helped to improve trust in collaboration and co-presence.…”
Section: Related Work
confidence: 99%
“…This is especially important in situations where contextual information is required to provide a rich understanding of the environment in order for people to communicate and interact with both other users and the environment itself. However, this can be especially challenging for virtual environments that recreate real physical spaces using depth cameras, as this may limit the user's sense of presence [62].…”
Section: Motivation
confidence: 99%
“…These are not trivial challenges, and proposed solutions are largely experimental [122,123]. Ideally, in collaborative XR, people should (1) experience the presence of others (e.g., via avatars of the full body, or parts of the body such as hands) [124,125]; (2) be able to detect the gaze direction of others [126] and, eventually, experience 'eye contact' [127]; (3) have on-demand access to what the others see (a 'shared field of view') [128,129]; (4) be able to share spatial context [123], especially in the case of remote collaboration (i.e., does it 'rain or shine' in one person's location, are they on the move, is it dark or light, are they looking at a water body?); (5) be able to use virtual gestures (handshake, wave, nod, other nonverbal communication) [129,130]; (6) be able to add proper annotations to scenes and objects and see others' annotations; and, last but not least, (7) be able to 'read' the emotional reactions of their collaboration partner [131].…”
Section: Interaction Design
confidence: 99%