Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication
DOI: 10.1145/2638728.2638783
Implicit gaze based annotations to support second language learning

Cited by 13 publications (7 citation statements)
References 6 publications
“…In their system, a long dwell time was chosen to minimise the 'Midas touch' effect when employing gaze as an input modality [30]. Similarly, implicit annotation of documents has been explored by employing the user's reading behaviour to distinguish the areas of the document that had been thoroughly read or just skimmed [11,39,58]. For instance, Okoso et al. employed eye-tracking to propose a feedback mechanism for second language readers.…”
Section: Document Annotation Using Gaze and Voice (mentioning)
confidence: 99%
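As a rough illustration of the dwell-time idea mentioned in this statement (a deliberately long dwell before gaze triggers an action, to reduce unintended 'Midas touch' selections), a minimal Python sketch follows; the sample format, threshold, and names are assumptions for illustration, not taken from the cited system:

    from dataclasses import dataclass

    # Dwell-time trigger: gaze must rest on the same target for a long
    # threshold before an action fires, reducing accidental activations.
    DWELL_THRESHOLD_MS = 1000  # deliberately long, per the cited design choice

    @dataclass
    class GazeSample:
        timestamp_ms: int
        target_id: str  # UI element or word currently under the gaze point

    def dwell_triggered(samples: list[GazeSample]) -> str | None:
        """Return the target id once gaze has dwelt on it past the threshold."""
        dwell_start = None
        current = None
        for s in samples:
            if s.target_id != current:
                current = s.target_id
                dwell_start = s.timestamp_ms
            elif s.timestamp_ms - dwell_start >= DWELL_THRESHOLD_MS:
                return current  # long dwell completed: treat as intentional
        return None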
“…This was done by implicitly annotating the document with reading features such as the reading speed, the amount of re-reading, and the number of fixations [39].…”
Section: Document Annotation Using Gaze and Voice (mentioning)
confidence: 99%
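The reading features named in this statement (reading speed, amount of re-reading, number of fixations) can be derived from a fixation sequence. Below is a minimal sketch under an assumed fixation format, not the cited paper's implementation:

    # Paragraph-level reading features from fixations.
    # fixations: list of (start_ms, duration_ms, word_index) within one paragraph.
    def reading_features(fixations, word_count):
        if not fixations:
            return {"fixations": 0, "words_per_minute": 0.0, "regressions": 0}

        total_ms = fixations[-1][0] + fixations[-1][1] - fixations[0][0]
        # Regressions: fixations that jump back to an earlier word (re-reading).
        regressions = sum(
            1 for prev, cur in zip(fixations, fixations[1:]) if cur[2] < prev[2]
        )
        wpm = word_count / (total_ms / 60000.0) if total_ms > 0 else 0.0
        return {
            "fixations": len(fixations),
            "words_per_minute": round(wpm, 1),
            "regressions": regressions,
        }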
“…Thus, stored annotations can be shared among multiple learners through this file, which is accessible via the network. Eye-Gaze [5], ASRLM [16], Livenotes [14], and A.nnotate [44] are based on the 'Common Annotation Framework' model, expressed in the Resource Description Framework (RDF) [55] and interpreted by an associated server. However, storing annotations outside of the annotated resource is rather complex.…”
Section: B. To Store Annotations (mentioning)
confidence: 99%
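As a loose illustration of storing an annotation as RDF outside the annotated document, the sketch below uses Python's rdflib with the W3C Web Annotation (oa:) vocabulary as a stand-in; the systems cited here build on the 'Common Annotation Framework', whose exact terms and server protocol differ, and all IRIs below are hypothetical:

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    # Annotation kept outside the annotated resource, so a server can
    # serve it to multiple learners over the network.
    OA = Namespace("http://www.w3.org/ns/oa#")

    g = Graph()
    g.bind("oa", OA)

    ann = URIRef("http://example.org/annotations/42")  # hypothetical IRI
    g.add((ann, RDF.type, OA.Annotation))
    g.add((ann, OA.hasTarget, URIRef("http://example.org/doc.html#para3")))
    g.add((ann, OA.hasBody, Literal("Re-read: unfamiliar vocabulary here")))

    print(g.serialize(format="turtle"))  # shareable via a network-accessible store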
“…For example, a feature designed to help the learner enrich his knowledge of the studied domain is presented by EyeGaze [5], ASRLM [16], and Crocodoc [46]. These annotation systems identify a list of documents by making inferences from the annotated passages and other documents in the annotation database.…”
Section: G. To Recommend Data Related to Annotations (mentioning)
confidence: 99%
“…Thus, it can be hard to tell which word a fixation belongs to (see also Figure 3). Previous work showed that gaze data can be robustly associated with a paragraph, since spaces between paragraphs are relatively large in a normal document [8]. Therefore, we consider the paragraph level to be the smallest unit that can contain document features.…”
Section: Segment Level Gaze Feature (mentioning)
confidence: 99%
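A minimal sketch of the paragraph-level association described in this statement: a fixation's vertical coordinate is matched against paragraph bounding boxes, which tolerates vertical noise because inter-paragraph gaps are comparatively large. Coordinates, slack, and layout values here are illustrative assumptions:

    # Associate a fixation with a paragraph rather than a word.
    # paragraph_boxes: list of (paragraph_id, top_y, bottom_y) in pixels.
    def paragraph_for_fixation(y, paragraph_boxes, slack=10):
        for pid, top, bottom in paragraph_boxes:
            if top - slack <= y <= bottom + slack:
                return pid
        return None  # fixation fell in a margin or a gap between paragraphs

    boxes = [("p1", 100, 260), ("p2", 300, 480), ("p3", 520, 700)]
    print(paragraph_for_fixation(y=295, paragraph_boxes=boxes))  # -> "p2"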