Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction 2017
DOI: 10.1145/3024969.3025010

LaserViz

Cited by 17 publications (3 citation statements)
References 23 publications
“…Gaze input can also be a particularly useful addition to collaborative settings. Studies by Zhang et al [52], van Rheden et al [42], and Pfeuffer et al [27] used gaze tracking to show that all collaborators recognize at which location a user is looking, thus increasing the awareness of collaborators. All these interaction concepts can be applied in an ad-hoc mobile cross-device setting; however, so far they have required either extensive calibration, used specific hardware, or needed a controlled lab environment to track gaze.…”
Section: Gaze Interactions
Citation type: mentioning, confidence: 99%
“…JVA features have been introduced to a broad range of applications, including collaborative search (Brennan et al, 2008), mediated interaction (Bente et al, 2007), infant-caregiver interaction (Markus et al, 2000) and training for children with autism (Whalen & Schreibman, 2003). Interest has grown in the use of synchronised eye-trackers to quantitatively measure gaze alignment in various collaborative situations (Bryant et al, 2019; Huang et al, 2019; Kim et al, 2020; Van Rheden et al, 2017). However, there are challenges in using eye-tracking sensors, including the high cost of the devices, and restricted environmental and calibration settings (e.g., the camera should be precisely in front of the student within a close distance and on top of a specific panel (Huang et al, 2019)).…”
Section: JVA Applications
Citation type: mentioning, confidence: 99%
“…Gaze-oriented cues can be used as a means of obtaining information about the cognitive activities of a collaborator, and there is evidence that students look at and point to the same object during collaborative co-located learning activities (Schneider & Pea, 2013; Schneider et al, 2018). This gaze alignment is called joint visual attention (JVA; Van Rheden et al, 2017); see Figure 1, for example. JVA is a strong predictor of successful collaboration among students (Pietinen et al, 2010; van der Meulen et al, 2016).…”
Section: Introduction
Citation type: mentioning, confidence: 99%
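
The statements above converge on one measurable notion: joint visual attention, i.e., two collaborators' gaze landing on roughly the same location at roughly the same time. As a rough illustration only, here is a minimal Python sketch of such a measure, assuming two time-stamped gaze streams already mapped into a shared coordinate space and sorted by time. GazeSample, jva_ratio, and the dist_px/max_dt thresholds are hypothetical names and values, not taken from LaserViz or any of the citing papers.

# Hypothetical sketch of a joint-visual-attention (JVA) measure.
# Assumes both streams are sorted by timestamp and expressed in a
# shared coordinate space; all names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float  # timestamp in seconds
    x: float  # gaze x-coordinate (e.g., pixels in shared screen space)
    y: float  # gaze y-coordinate

def jva_ratio(gaze_a, gaze_b, dist_px=100.0, max_dt=0.2):
    """Fraction of temporally aligned sample pairs whose gaze points
    fall within dist_px of each other: a simple proxy for JVA."""
    if not gaze_b:
        return 0.0
    joint, pairs, j = 0, 0, 0
    for a in gaze_a:
        # Advance j to the sample in gaze_b closest in time to a.
        while j + 1 < len(gaze_b) and abs(gaze_b[j + 1].t - a.t) < abs(gaze_b[j].t - a.t):
            j += 1
        b = gaze_b[j]
        if abs(b.t - a.t) > max_dt:
            continue  # no partner sample close enough in time
        pairs += 1
        if ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5 <= dist_px:
            joint += 1
    return joint / pairs if pairs else 0.0

A higher ratio would indicate more sustained gaze alignment; the cited studies differ in how they align clocks, filter fixations to remove saccades, and choose distance and time thresholds, so any real measure would need those choices made explicit.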