2021
DOI: 10.3389/frvir.2021.697367

Eye See What You See: Exploring How Bi-Directional Augmented Reality Gaze Visualisation Influences Co-Located Symmetric Collaboration

Abstract: Gaze is one of the predominant communication cues and can provide valuable implicit information, such as intention or focus, when performing collaborative tasks. However, little research has been done on how virtual gaze cues combining spatial and temporal characteristics impact real-life physical tasks during face-to-face collaboration. In this study, we explore the effect of showing joint gaze interaction in an Augmented Reality (AR) interface by evaluating three bi-directional collaborative (BDC) gaze visuali…

Cited by 32 publications (5 citation statements) | References: 35 publications
“…The participants could interact seamlessly with remote partners through pseudo‐eye contact. Similar to previous studies, involving codriver (Maurer et al, 2014) and visual searching and matching tasks (Jing et al, 2021), the current tasks requiring coordinating spatial referencing and attention benefit from this shared gaze.…”
Section: Discussion (supporting)
confidence: 74%
“…Therefore, measuring cognitive load or work load in HCI research is popular. For example, Jing et al (2021), adopted SMEQ questionnaire (Sauro and Dumas, 2009) to measure workload in evaluating three bidirectional collaborative gaze visualizations with three levels of gaze behaviors for co-located collaboration. The cognitive load in their research task is extraneous cognitive load as the researchers manipulated the manner of communicating cues.…”
Section: Cognitive Load Theory (mentioning)
confidence: 99%
“…In another study, Jing et al compared different types of gaze visualizations and the effect of gaze behaviours in a co-located environment with AR head-mounted displays and showed that gaze markers are helpful indicators for intentions and joint attention [31]. Erickson et al analyzed the effectiveness of gaze rays on target identification with a simulated gaze and examined the error types [32].…”
Section: Communication and Collaboration (mentioning)
confidence: 99%