2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA) 2018
DOI: 10.1109/icmla.2018.00085
Classification of Eye Tracking Data Using a Convolutional Neural Network

Cited by 18 publications (9 citation statements)
References 9 publications
“…Most of these studies do not focus on differentiating mental states from the data but rather on improving the gaze estimation itself, unsupervised feature extraction, or predictions about the demographics of the participants. The use cases for the applications are manifold, such as websites (Yin et al, 2018) or Augmented and Virtual Reality (Lemley et al, 2018).…”
Section: Related Work On Deep Learning For Eye Trackingmentioning
confidence: 99%
“…In [57], a modified LeNet5 CNN model combined with a feature-engineering model was used to determine whether a user was interacting with a particular interface (Google News or NewsMap) to answer questions about current events. The resulting grayscale images were used to train the CNN model and to perform two classification tasks: identifying web user interfaces and the nationalities of users.…”
Section: Related Workmentioning
confidence: 99%
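The citation above describes feeding grayscale gaze images to a modified LeNet5 CNN. As a rough illustration of that pipeline shape, the following is a minimal NumPy sketch of a LeNet5-style forward pass (one convolution, ReLU, 2×2 max pooling, a dense two-class head); the layer sizes, kernel, and weights here are illustrative placeholders, not the configuration used in [57].

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid cross-correlation of a 2-D input with a 2-D kernel."""
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling."""
    h, w = (x.shape[0] // s) * s, (x.shape[1] // s) * s
    return x[:h, :w].reshape(h // s, s, w // s, s).max(axis=(1, 3))

image = rng.random((28, 28))          # downsampled grayscale gaze image (illustrative size)
kernel = rng.standard_normal((5, 5))  # one 5x5 filter (would be learned in training)
features = np.maximum(conv2d(image, kernel), 0.0)  # ReLU activation, 24x24
pooled = max_pool(features)                        # 24x24 -> 12x12
weights = rng.standard_normal((pooled.size, 2))    # dense head, one score per class
logits = pooled.ravel() @ weights                  # 2 logits for a binary task
```

A real LeNet5 stacks two such conv/pool stages and several dense layers, but the per-layer mechanics are the same as in this sketch.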
“…The eye tracking data were sliced using a 10-s time window where every 10 seconds of gaze points were grouped for the three different information presentation methods. We chose a time window of 10 s to keep this study consistent with our previous study [57]. Each 10-s gaze point image was represented by a 2D array with a size of 1440 × 900 -which corresponds directly to the screen resolution of the computer used for the study.…”
Section: Data Preprocessingmentioning
confidence: 99%
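The preprocessing described above (10-s windows of gaze points, each rendered as a 1440 × 900 array matching the screen resolution) can be sketched as follows; the function name, signature, and hit-count encoding are assumptions for illustration, not taken from the cited paper.

```python
import numpy as np

def rasterize_windows(timestamps, xs, ys, window_s=10.0, width=1440, height=900):
    """Slice gaze samples into fixed time windows; return one 2-D
    hit-count array (height x width, mirroring the screen) per window."""
    t0 = timestamps[0]
    n_windows = int(np.ceil((timestamps[-1] - t0) / window_s))
    images = []
    for w in range(n_windows):
        in_window = (timestamps >= t0 + w * window_s) & \
                    (timestamps < t0 + (w + 1) * window_s)
        img = np.zeros((height, width))
        for x, y in zip(xs[in_window], ys[in_window]):
            if 0 <= x < width and 0 <= y < height:
                img[int(y), int(x)] += 1  # accumulate gaze hits per pixel
        images.append(img)
    return images

# ~25 s of synthetic gaze samples at 4 Hz, all at screen center
t = np.arange(0.0, 25.0, 0.25)
x = np.full_like(t, 720.0)
y = np.full_like(t, 450.0)
windows = rasterize_windows(t, x, y)  # 3 windows of shape (900, 1440)
```

Each returned array can then be treated as a grayscale image and passed to the CNN, which is consistent with the image-based pipeline the citation describes.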