Leveraging mobile eye-trackers to capture joint visual attention in co-located collaborative learning groups
2018
DOI: 10.1007/s11412-018-9281-2

Cited by 76 publications (75 citation statements): 5 supporting, 63 mentioning, 0 contrasting
References 18 publications
“…First, rather than utilizing the full RQA matrix, which compares each point to all other points and is therefore less theoretically interesting, we restricted our analyses to recurrence points across short time lags. This is consistent with findings that tighter coupling occurs during real-time interaction, as well as associated research that analyzes time lags up to 2 s in order to examine local coupling patterns (Coco & Dale; Richardson & Dale; Schneider et al.). We extend this approach to aRQA and MdRQA by focusing on local dynamics, or time lags occurring between 1 and 2 s apart.…”
Section: Methods (supporting)
confidence: 89%
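To make the lag restriction concrete, the sketch below counts cross-recurrent pairs only on diagonals that sit 1-2 s away from the main diagonal, which is the "local dynamics" band the statement describes. This is a minimal Python sketch, not the authors' code; the function name, the 10 Hz sampling rate, the 1-D series, and the distance radius are all illustrative assumptions.

import numpy as np

def lag_band_recurrence(x, y, radius=0.1, rate_hz=10, min_lag_s=1.0, max_lag_s=2.0):
    # Fraction of cross-recurrent point pairs whose lag falls in the
    # [min_lag_s, max_lag_s] band, rather than anywhere in the full matrix.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = min(len(x), len(y))
    lo = max(1, int(round(min_lag_s * rate_hz)))
    hi = int(round(max_lag_s * rate_hz))
    hits, total = 0, 0
    for lag in range(lo, hi + 1):
        i = np.arange(n - lag)
        # Count both off-diagonal directions: x leading y, and y leading x.
        hits += np.sum(np.abs(x[i] - y[i + lag]) <= radius)
        hits += np.sum(np.abs(x[i + lag] - y[i]) <= radius)
        total += 2 * (n - lag)
    return hits / total

For real gaze data the scalar distance would be replaced by a Euclidean distance over (x, y) screen coordinates; the lag-banding logic stays the same.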
“…In the context of collaborative learning, the dynamic and reciprocal adaptation of shared interaction emerges: that is, when individuals in a group are not only working on the same activity at the same time, but are also all "in tune" mentally (Baker 2002; Popov et al. 2017). Therefore, synchronicity between individuals in collaborative learning can also be seen in gazes (Schneider and Pea 2013), joint visual attention (Schneider et al. 2018), and physiology (Ahonen et al. 2018; Gillies et al. 2016).…”
Section: Collaborative Learning and Physiological Synchrony (mentioning)
confidence: 99%
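As a rough illustration of what a physiological synchrony index can look like, the toy function below averages sliding-window Pearson correlations between two learners' traces (e.g., electrodermal activity). It is a simplistic assumed measure for illustration only, not the metric used in the studies cited above.

import numpy as np

def windowed_synchrony(sig_a, sig_b, win=40, step=20):
    # Mean sliding-window Pearson correlation between two signals:
    # higher values mean the two traces rise and fall together more often.
    sig_a = np.asarray(sig_a, dtype=float)
    sig_b = np.asarray(sig_b, dtype=float)
    n = min(len(sig_a), len(sig_b))
    rs = []
    for start in range(0, n - win + 1, step):
        a = sig_a[start:start + win]
        b = sig_b[start:start + win]
        if a.std() > 0 and b.std() > 0:  # skip flat windows
            rs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(rs)) if rs else float("nan")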
“…Another approach, multimodal learning analytics (MMLA), seeks to capture and synthesize some combination of all three types of learning data: behavioral, physiological, and representational, towards the goal of developing comprehensive models of student learning (Worsley et al., 2016). Recent studies of co-located, computer-based, collaborative problem solving (CPS) using the MMLA approach focus on understanding how gaze, gesture, and physical actions in the computer environment predict group success (Cukurova, Luckin, Millán, & Mavrikis, 2018; Schneider & Blikstein, 2015; Schneider, Sharma, Cuendet, Zufferey, Dillenbourg, & Pea, 2018; Spikol, Ruffaldi, Landolfi, & Cukurova, 2017). These studies have demonstrated the utility of these types of data as a means of convergent triangulation in correlating non-verbal elements with learning gains.…”
Section: Learning Analytics (mentioning)
confidence: 99%
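The "capture and synthesize" step can be pictured as early fusion: per-group feature blocks from each modality are concatenated and fed to a single predictor of group success. The sketch below is a hypothetical Python illustration with placeholder data and feature names; the logistic-regression predictor is an assumed choice, not the method of the cited studies.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder per-group feature blocks; a real MMLA pipeline would extract
# these from gaze recordings, physiological sensors, and activity logs.
rng = np.random.default_rng(0)
n_groups = 40
gaze = rng.random((n_groups, 3))        # e.g., joint-attention rate, gaze entropy
physio = rng.random((n_groups, 2))      # e.g., mean electrodermal synchrony
actions = rng.random((n_groups, 4))     # e.g., edit counts, turn-taking balance
success = rng.integers(0, 2, n_groups)  # binary group outcome (placeholder)

# Early fusion: concatenate modality blocks, then predict group success.
X = np.hstack([gaze, physio, actions])
scores = cross_val_score(LogisticRegression(max_iter=1000), X, success, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")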