Proceedings of the 18th ACM International Conference on Multimodal Interaction 2016
DOI: 10.1145/2993148.2993202
Using touchscreen interaction data to predict cognitive workload

Cited by 17 publications (21 citation statements): 2 supporting, 19 mentioning, 0 contrasting. Citing publications span 2017–2023. References 14 publications.
“…Further investigation into how these different modalities were used revealed that logs and videos were mostly used to evaluate the performance of the participants, either in a quantitative way using logs (Liu, Stamper, & Davenport, 2018; Liu et al., 2019; Mock et al., 2016; Sharma, Papamitsiou, et al., 2019) or environment variables, or in a qualitative manner using videos (Worsley & Blikstein, 2015, 2018). The rest of the multimodal sources were used to quantify behavioral trajectories, such as interaction behavior using touch gestures (Mock et al., 2016), engagement with problem space using EDA and audio (flow, stress; Worsley & Blikstein, 2015, 2018), understanding and misconceptions using physiological data (Liu et al., 2018, 2019), and problem-solving behavior using faces and EEG (Sharma, Papamitsiou, et al., 2019) and eye-tracking.…”
Section: Data Collection, Sample Size and Methodology
Citation type: mentioning (confidence: 99%)
“…(2018) used skeleton positions and kinematics features to predict participants' recall; Andrade, Danish, and Maltese (2017) used hand positions and head poses to predict students' understanding in a predator-prey simulation; Spikol et al. (2018) used the distance between students' faces, hand motion speed, and the distance between hands to predict the quality of students' projects; and Mock et al. (2016) used interaction logs from a touchscreen and hand movements to predict the cognitive workload of the students.…”
Section: For Learning Behavior and Performance
Citation type: mentioning (confidence: 99%)
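
The pipeline these statements summarize for Mock et al. (2016), features derived from touchscreen interaction logs fed to a classifier that predicts workload level, can be sketched minimally as below. Everything in the sketch, from the feature set and event format to the synthetic labels and the random-forest model, is an illustrative assumption, not the paper's actual method:

```python
# Minimal sketch: predicting a binary cognitive-workload level from
# touchscreen interaction logs. Feature set, event format, labels, and
# model are ASSUMPTIONS for illustration, not Mock et al.'s pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def synth_window(high_load):
    """Synthesize one task window of touch events (t, x, y, duration).
    Assumption: high workload -> slower, more variable interaction."""
    n = int(rng.integers(8, 20))
    mean_gap = 1.2 if high_load else 0.6
    t = np.cumsum(rng.exponential(mean_gap, n))
    x, y = rng.uniform(0, 1080, n), rng.uniform(0, 1920, n)
    dur = rng.normal(0.25 if high_load else 0.15, 0.05, n).clip(0.02)
    return np.column_stack([t, x, y, dur])

def touch_features(w):
    """Fixed-length feature vector from one window of touch events."""
    gaps = np.diff(w[:, 0])                                    # inter-touch intervals
    path = np.linalg.norm(np.diff(w[:, 1:3], axis=0), axis=1)  # finger travel
    return np.array([len(w),                   # touch count (interaction rate)
                     gaps.mean(), gaps.std(),  # timing mean / variability
                     w[:, 3].mean(), w[:, 3].std(),  # touch durations
                     path.mean()])             # mean distance between touches

labels = rng.integers(0, 2, 200)  # 0 = low, 1 = high workload (synthetic)
X = np.vstack([touch_features(synth_window(lbl)) for lbl in labels])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("mean CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```

In a real study the labels would come from manipulated task difficulty or subjective ratings (e.g., NASA-TLX) rather than being synthesized, and evaluation would be split by participant to avoid leakage across windows from the same person.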
“…MMD has been used to predict performance and engagement in educational contexts in previous research [3], [18], [42]. These contexts vary from games [18], [25], to assessment systems [40], to adaptive systems [32], to collaborative systems [24]. However, one common factor in these studies is the use of multiple data streams (e.g., gaze, facial expressions, Electroencephalography (EEG), heart rate, log data) to predict and explain learning performance [18], [40], behaviour [3], [42] or experience [37], [38].…”
Section: B. Multi-modal Data-based Predictions in Education
Citation type: mentioning (confidence: 99%)
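
The "multiple data streams" pattern this last statement describes is commonly realized as feature-level (early) fusion, in which per-stream feature blocks are concatenated into one vector before a single classifier. Below is a minimal sketch on synthetic data; the stream names, dimensions, and model are illustrative assumptions, not the setup of any cited study:

```python
# Minimal sketch of feature-level ("early") fusion across modalities.
# Stream names, feature dimensions, and labels are ASSUMPTIONS.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 120                              # labelled task windows

gaze  = rng.normal(size=(n, 6))      # e.g., fixation/saccade statistics
heart = rng.normal(size=(n, 3))      # e.g., HR mean, HRV features
logs  = rng.normal(size=(n, 5))      # e.g., action counts, latencies
y     = rng.integers(0, 2, n)        # performance label (synthetic)

X = np.hstack([gaze, heart, logs])   # early fusion: concatenate blocks
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```

Early fusion keeps the model simple but assumes the streams can be windowed and synchronized together; when sampling rates differ widely, late fusion (one model per stream, predictions combined afterwards) is the usual alternative.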