Proceedings of the 2017 Conference on Human Information Interaction and Retrieval (CHIIR 2017)
DOI: 10.1145/3020165.3020170

SearchGazer

Cited by 93 publications (41 citation statements)
References 30 publications
“…More generally, it is worth noting that the present findings may also be applicable to a number of real-life scenarios or professions that are conceptually similar to the visual search task used in the present study. For instance, one scenario could involve people jointly searching web content while the eye movements of co-actors (tracked via webcams; Papoutsaki et al., 2016) are displayed on the computer screen. More generally, people frequently perform tasks collaboratively online in remote locations that involve information about a co-actor's actions (e.g., working collaboratively on a manuscript in real time involves seeing another person's additions and deletions of text).…”
Section: Discussion (citation type: mentioning)
confidence: 99%
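The webcam-based gaze tracking this excerpt cites (Papoutsaki et al., 2016) is available as the WebGazer.js browser library. Below is a minimal sketch of subscribing to its gaze estimates and drawing them as an on-screen marker; the hand-written type declaration, the marker element, and the idea of relaying samples to a co-actor over a real-time channel are illustrative assumptions, not part of the cited work.

```typescript
// Minimal sketch: subscribe to webcam-based gaze estimates via webgazer.js
// (Papoutsaki et al., 2016) and draw them as a dot, as in the shared-search
// scenario described above. Assumes webgazer.js is loaded as a page-level
// <script>; the type declaration below is hand-written, not the library's own.
interface WebGazer {
  setGazeListener(
    listener: (data: { x: number; y: number } | null, elapsedMs: number) => void
  ): WebGazer;
  begin(): void;
}
declare const webgazer: WebGazer;

// Hypothetical gaze marker: a small dot moved to each new estimate.
const marker = document.createElement("div");
marker.style.cssText =
  "position:fixed;width:12px;height:12px;border-radius:50%;background:red;pointer-events:none;";
document.body.appendChild(marker);

webgazer
  .setGazeListener((data) => {
    if (!data) return; // no prediction yet (e.g., face not detected)
    marker.style.left = `${data.x - 6}px`;
    marker.style.top = `${data.y - 6}px`;
    // To display a co-actor's gaze instead, these (x, y) samples would be sent
    // over a real-time channel (e.g., a WebSocket) and rendered on the partner's screen.
  })
  .begin(); // starts webcam capture and gaze prediction
```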
“…One advantage of iCatcher is that it does not require any calibration of participant eye gaze, which is important for its utility in developmental science given that infants sometimes lose interest during the calibration or, even worse, partway through an experiment. Moreover, in several other available automated methods, participants must click on the location on the screen where they are looking, as in Papoutsaki et al. (2016); infants and young children are physically unable to perform this task. We also take advantage of real behavioral constraints.…”
Section: Discussion (citation type: mentioning)
confidence: 99%
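The click-based calibration this excerpt contrasts with iCatcher treats each click as ground truth for where the participant is looking at that moment. The sketch below is a deliberately simplified, library-agnostic illustration (a constant offset correction rather than the regression models actual webcam trackers fit); every name in it is hypothetical.

```typescript
// Illustrative only: click-based calibration pairs the tracker's current
// prediction with the clicked location (assumed to be the true gaze point),
// then corrects later estimates. Real webcam trackers fit regression models
// from image features; a constant offset is used here purely for clarity.
interface GazePoint {
  x: number;
  y: number;
}

const calibrationPairs: { predicted: GazePoint; clicked: GazePoint }[] = [];

// Called on each calibration click with the tracker's prediction at click time.
function addCalibrationClick(predicted: GazePoint, clicked: GazePoint): void {
  calibrationPairs.push({ predicted, clicked });
}

// Apply the mean prediction error observed during calibration to a new estimate.
function correctEstimate(raw: GazePoint): GazePoint {
  if (calibrationPairs.length === 0) return raw;
  const n = calibrationPairs.length;
  const dx = calibrationPairs.reduce((s, p) => s + (p.clicked.x - p.predicted.x), 0) / n;
  const dy = calibrationPairs.reduce((s, p) => s + (p.clicked.y - p.predicted.y), 0) / n;
  return { x: raw.x + dx, y: raw.y + dy };
}
```

Because every calibration sample depends on a deliberate click at a known location, the procedure presupposes a participant who can follow instructions, which is exactly the constraint the excerpt raises for infants.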
“…In the case of eye tracking, this would be a distribution of relevant images of participant faces, preferably in annotated format. Many other appearance-based methods also exist, most of which rely on a calibration step by the user to maintain good accuracy and to compensate for not being data-driven (Papoutsaki et al., 2016; Zieliński, n.d.), making them much less useful for developmental or clinical populations. Moreover, these options fall short in terms of robustness relative to the dominant deep-learning-based options.…”
Citation type: mentioning
confidence: 99%
“…In order to combine both measures, more sophisticated correction of the pupil foreshortening error may be required (Hayes & Petrov, 2016). On the other hand, scanpath length could be conceived as an alternative to pupillary responses in situations of low-quality eye tracking, because recording gaze coordinates likely requires a lower camera resolution than pupillary responses do and has been reported using consumer-grade cameras (Papoutsaki, Laskey, & Huang, 2017; Papoutsaki et al., 2016). With the resolution of such cameras, typical threat-conditioned pupil size responses (PSR) are on the order of one pixel and thus possibly not detectable at all.…”
Section: Discussion (citation type: mentioning)
confidence: 99%
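Scanpath length, proposed in this excerpt as a coarser but more robust measure than pupil size, is simply the summed point-to-point distance along successive gaze samples or fixations. A minimal sketch follows; coordinates are assumed to be screen pixels, and converting to degrees of visual angle would additionally require viewing distance and display geometry.

```typescript
// Scanpath length: the sum of Euclidean distances between consecutive gaze
// samples (or fixation centroids), here in screen pixels.
interface GazePoint {
  x: number;
  y: number;
}

function scanpathLength(points: GazePoint[]): number {
  let total = 0;
  for (let i = 1; i < points.length; i++) {
    total += Math.hypot(points[i].x - points[i - 1].x, points[i].y - points[i - 1].y);
  }
  return total;
}

// Example: three fixations forming an L-shaped path of 300 px + 400 px = 700 px.
console.log(scanpathLength([{ x: 100, y: 100 }, { x: 400, y: 100 }, { x: 400, y: 500 }]));
```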