2019
DOI: 10.1007/978-3-030-11024-6_13

Inferring Human Knowledgeability from Eye Gaze in Mobile Learning Environments

Abstract: What people look at during a visual task reflects an interplay between ocular motor functions and cognitive processes. In this paper, we study the links between eye gaze and cognitive states to investigate whether eye gaze reveals information about an individual's knowledgeability. We focus on a mobile learning scenario where a user and a virtual agent play a quiz game using a hand-held mobile device. To the best of our knowledge, this is the first attempt to predict a user's knowledgeability from eye gaze using …

Cited by 8 publications (5 citation statements)
References 22 publications
“…As we can see, both streams of information contribute to the performance of DecNet Comb, with head pose data achieving better performance when considered alone. The performance of DecNet Gaze on the audio task, however, is not surprising, as it is comparable to previous studies on knowledgeability anticipation from gaze information alone [37]. On the visual task, instead, we can see that gaze data is more relevant to the final F1-score performance of DecNet, as it identifies the main resource used by the participants to capture the information required to complete the task.…”
Section: A. Classification Performance
Classification: supporting, confidence: 83%
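
For reference, the F1-score mentioned in the statement above is the harmonic mean of precision P and recall R, a standard definition independent of the cited models:

F1 = 2 · P · R / (P + R)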
“…Gaze patterns have been widely used in cognitive human-machine interaction. For example, [36] used gaze patterns to infer a user's level of domain knowledge in the field of genomics, while [37] focused on knowledgeability prediction using a non-invasive eye-tracking method on mobile devices with Support Vector Machines (SVMs).…”
Section: B. Gaze Patterns in Cognitive Human-Machine Interaction
Classification: mentioning, confidence: 99%
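
The SVM-based knowledgeability prediction referenced in [37] can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the gaze features (fixation duration, fixation count, saccade amplitude), the synthetic labels, and the model parameters are hypothetical and not taken from the cited paper; the sketch only shows the general shape of such a classifier in Python with scikit-learn.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-question gaze features:
# [mean fixation duration (ms), fixation count, mean saccade amplitude (deg)]
n = 200
X = rng.normal(loc=[250.0, 12.0, 4.0], scale=[60.0, 4.0, 1.5], size=(n, 3))

# Synthetic binary label: 1 = user knows the answer, 0 = user does not.
# (Toy rule: shorter mean fixations -> knows; purely for demonstration.)
y = (X[:, 0] + 10.0 * rng.normal(size=n) < 250.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# RBF-kernel SVM with feature standardisation, a common default choice.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

In a real setting, such features would be extracted from eye-tracker fixation and saccade events recorded per quiz question, and the model would be evaluated with cross-validation rather than a single train/test split.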
“…Imagine, for example, a teaching assistance system which, to provide optimal support for a student, must be able to assess whether a change in task demand, e.g., increasing the level of difficulty, is appropriate or overextending for the student. Only then can the system adjust to the right level of information supply or offer additional support for solving the task (see, e.g., [71]). For such an assessment of the human state, gaze behavior has been suggested as a rich data source whose analysis can provide unobtrusive insights into a user's cognitive or emotional state (e.g., [5, 72, 73]).…”
Section: Discussion
Classification: mentioning, confidence: 99%
“…Imagine, for example, a teaching assistance system which, in order to provide optimal support for a student, must be able to assess whether a change in task demand, e.g., increasing the level of difficulty, is appropriate or overextending for the student. Only then can the system adjust to the right level of information supply or offer additional support for solving the task (see, for example, [6]). For such an assessment of the human state, gaze behavior has been suggested as a rich data source whose analysis can provide unobtrusive insights into a user's cognitive or emotional state (e.g. …”
Section: Discussion
Classification: mentioning, confidence: 99%