2013
DOI: 10.1007/s11263-013-0691-3
Visual Focus of Attention in Non-calibrated Environments using Gaze Estimation

Cited by 49 publications (25 citation statements)
References 38 publications
“…This information can be measured by the estimation of someone's gaze or face orientation. Several studies show how to determine the face's orientation to estimate the focus of attention in several contexts: in front of a computer screen [3], in interaction with an ECA [11,7] or a mobile robot [21,4]. In [13], the user's attention is estimated using several features: position and posture, face orientation, proximity from the system and smile detection.…”
Section: Engagement In Human-Machine Interaction
confidence: 99%
“…Our method achieves 100% robustness and 3.9° average accuracy, E_m, which outperforms [7,8]. It has mean error E_m similar to [25,29,31], but worse than [4,20,35,40,41]. With the use of the synthetic appearance model, the result is promising.…”
Section: Results
confidence: 83%
“…Recently, the cascaded regression has shown very impressive results in face alignment, such as [6,18,27,41]; however, these methods are developed for near-frontal faces between ± 45° of yaw rotation. Asteriadis et al. [4] proposed the combination of traditional tracking techniques and deep learning to provide proficient pose tracking. Many commercial products also exist, e.g.…”
Section: Introduction
confidence: 99%
“…This set of scenes is then analyzed using software that calculates the visual saliency or basic attractiveness in terms of attention of each area in the scene. The areas of high and low visual saliency are mapped across the scenes, allowing for a ready assessment of which features or areas are particularly visually attractive and which are less likely to draw attention (following the kind of approach used in, e.g., McNamara et al 2014; Baluch and Itti 2015; Asteriadis et al 2014). This set of saliency maps, together with the corresponding images of what is visible in each moment, form the dataset from which further inferences are made.…”
Section: AQ5
confidence: 99%
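The excerpt above describes computing a per-pixel saliency map for each scene and then comparing high- and low-saliency regions. A minimal NumPy sketch of one classical bottom-up approach (the spectral-residual method of Hou & Zhang, 2007) illustrates the idea; this is an assumption-laden stand-in, not the specific software used by the cited study:

```python
import numpy as np

def box_blur(a, k):
    """Mean filter with a (2k+1)x(2k+1) window, edge-padded."""
    p = np.pad(a, k, mode="edge")
    out = np.zeros(a.shape, dtype=float)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (2 * k + 1) ** 2

def spectral_residual_saliency(img, k=3):
    """Saliency map via the spectral-residual method.

    img: 2-D float array (grayscale scene). Returns a map scaled to
    [0, 1]; higher values mark areas more likely to draw attention.
    """
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # The "residual" is the log-amplitude spectrum minus its smooth
    # local trend: the statistically unexpected (salient) part.
    residual = log_amp - box_blur(log_amp, k)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = box_blur(sal, 2)  # final smoothing of the raw map
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

# A lone bright patch on a dark background should dominate the map,
# mirroring the high- vs low-saliency split described in the excerpt.
scene = np.zeros((64, 64))
scene[20:28, 30:38] = 1.0
smap = spectral_residual_saliency(scene)
inside = smap[20:28, 30:38].mean()
outside = (smap.sum() - smap[20:28, 30:38].sum()) / (64 * 64 - 64)
```

Thresholding `smap` then yields the kind of high/low-saliency region partition the excerpt refers to; the window size `k` and the toy `scene` are illustrative choices, not parameters from the cited work.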