Detection of eye contact with deep neural networks is as accurate as human experts
2020
DOI: 10.1038/s41467-020-19712-x

Abstract: Eye contact is among the most primary means of social communication used by humans. Quantification of eye contact is valuable as a part of the analysis of social roles and communication skills, and for clinical screening. Estimating a subject’s looking direction is a challenging task, but eye contact can be effectively captured by a wearable point-of-view camera which provides a unique viewpoint. While moments of eye contact from this viewpoint can be hand-coded, such a process tends to be laborious and subjec…

Cited by 36 publications (27 citation statements). References 61 publications.
“…In this section, the mutual gaze classifier is compared with the solution proposed in Chong et al (2020). To the best of our knowledge, this is the most recent solution in the current literature that best adapts to our purposes.…”
Section: Results
confidence: 99%
“…To the best of our knowledge, this is the most recent solution in the current literature that best adapts to our purposes. In Chong et al (2020), the authors trained a deep convolutional neural network (i.e., ResNet-50 (He et al, 2016)) as the backbone to automatically detect eye contact during face-to-face interactions. The authors reported an overall precision of 0.94 and an F1-score of 0.94 on 18 validation subjects.…”
Section: Results
confidence: 99%
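To make the reported figures concrete, precision and F1 follow directly from confusion-matrix counts. A minimal sketch in plain Python, using hypothetical counts chosen only for illustration (not data from Chong et al (2020)):

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple:
    """Compute precision, recall, and F1 from confusion-matrix counts
    of a binary classifier (e.g., eye contact vs. no eye contact)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    f1 = (2 * precision * recall / denom) if denom else 0.0
    return precision, recall, f1

# Hypothetical counts that happen to yield precision = F1 = 0.94,
# mirroring the figures quoted above; these are NOT the paper's data.
p, r, f1 = precision_recall_f1(tp=94, fp=6, fn=6)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.94 0.94 0.94
```

Note that precision and F1 coincide here only because the false-positive and false-negative counts are equal, which makes precision equal recall.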
“…For a social robot, first-person videos can be captured by a camera embedded in the eye-pupil or the forehead. From these videos, face-to-face postures (distances or angles) or eye contact states can be obtained using facial detection and/or machine learning techniques (Chong et al, 2020; Mitsuzumi et al, 2017). Similar systems such as wearable eye trackers (Chong et al, 2017) or proximity sensors (Hachisu et al, 2018) can be used for face-to-face interaction analysis as well.…”
Section: Technical Challenges Toward Artificial Systems That Incorporate Humanitude Techniques
confidence: 99%