Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society
DOI: 10.1145/3306618.3314284

A Comparative Analysis of Emotion-Detecting AI Systems with Respect to Algorithm Performance and Dataset Diversity

Abstract: In recent news, organizations have been considering the use of facial and emotion recognition for applications involving youth, such as tackling surveillance and security in schools. However, the majority of research efforts on facial emotion recognition have focused on adults. Children, particularly in their early years, have been shown to express emotions quite differently than adults. Thus, before such algorithms are deployed in environments that impact the wellbeing and circumstance of youth, a careful exam…

Cited by 73 publications (6 citation statements). References 21 publications.
“…At this point let us mention that each task and corresponding database contains ambiguous cases: i) there is generally a discrepancy in the perception of the disgust, fear, sadness and (negative) surprise emotions across different people (of different ethnicity, race and age) and across databases; emotional displays and their perception are not universal, i.e., facial expressions are displayed and interpreted differently depending on the cultural background of subjects and annotators [59], [60]; ii) the exact valence and arousal value for a particular affect is also not consistent among databases; iii) the AU annotation process is hard to do and error prone, creating inconsistency among databases (e.g., regarding the dependencies among AUs, such as AU12 (lip corner puller) and AU15 (lip corner depressor), which cannot co-occur because their corresponding muscle groups (zygomaticus major and depressor anguli oris, respectively) are unlikely to be simultaneously activated).…”
Section: Results: Generalisation to Unseen Databases
confidence: 99%
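
The AU12/AU15 example in the statement above suggests a simple automated sanity check: scan annotated frames for mutually exclusive Action Units that have been labelled as co-occurring. The sketch below assumes a hypothetical storage format (one dict of binary AU labels per frame, not anything from the cited paper); the exclusion list contains only the pair named in the quote.

```python
# A minimal AU-annotation consistency check, assuming binary AU labels stored
# as one dict per frame (a hypothetical format, not from the paper).

# Pairs of Action Units whose muscle groups are unlikely to be active at once,
# e.g. AU12 (lip corner puller, zygomaticus major) vs. AU15 (lip corner
# depressor, depressor anguli oris), per the statement above.
MUTUALLY_EXCLUSIVE = [("AU12", "AU15")]

def find_inconsistent_frames(annotations):
    """Return indices of frames whose labels violate a co-occurrence rule."""
    bad = []
    for i, frame in enumerate(annotations):
        for au_a, au_b in MUTUALLY_EXCLUSIVE:
            if frame.get(au_a, 0) and frame.get(au_b, 0):
                bad.append(i)
                break
    return bad

# The second frame marks both AUs active, so it is flagged for review.
frames = [{"AU12": 1, "AU15": 0}, {"AU12": 1, "AU15": 1}]
print(find_inconsistent_frames(frames))  # -> [1]
```
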
“…This technology can detect eight emotions based on Ekman’s FACS (anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise) from faces in a given photo and assign a score to each emotional category for every detected face, so that the sum of the eight scores is one (Kim & Kim, 2018). The analysis of still images consists of finding the face in the image, extracting the relevant features (facial action units, AUs), and finally classifying the image using algorithms trained through machine learning techniques (Bryant & Howard, 2019). The emotion detection is performed after the face has been detected.…”
Section: Methods
confidence: 99%
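
The scoring behaviour this statement describes (eight per-face scores that sum to one) is what a softmax over per-category classifier outputs produces. The sketch below illustrates only that final normalisation step, under the assumption that some upstream model yields one raw score per emotion; face detection, AU extraction, and the trained classifier are omitted, and the logits are made up.

```python
# Minimal sketch of the final scoring step: a softmax turns one raw classifier
# output (logit) per emotion into eight scores that sum to one. The logits
# below are hypothetical; detection and feature extraction are out of scope.
import math

EMOTIONS = ["anger", "contempt", "disgust", "fear",
            "happiness", "neutral", "sadness", "surprise"]

def softmax(logits):
    """Normalise raw scores into a distribution that sums to one."""
    m = max(logits)                       # subtract the max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for one detected face.
logits = [0.2, -1.0, -0.5, 0.1, 2.3, 1.1, -0.8, 0.4]
scores = dict(zip(EMOTIONS, softmax(logits)))
print(max(scores, key=scores.get))        # -> happiness
print(round(sum(scores.values()), 6))     # -> 1.0
```
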
“…This field of human-computer interaction and affective computing, focused on emotion recognition, is a new frontier that could have relevant consequences in health care or education. The algorithm uses facial detection and semantic analysis to interpret mood from photos and videos, both static and in real time (Deshmukh & Jagtap, 2017). Technology that reveals human feelings could be used to identify students in trouble in a classroom environment, help autistic people better interact with others, and encourage better relationships based on empathy and understanding.…”
Section: Research Article (2020)
confidence: 99%
“…Machine vision and natural language processing technology is used with ML to "identify" emotions so that applications can respond to a user in real time. There has been critique of this on several grounds, including: the inability of ML mathematical models to grasp the cultural and contextual specificity of emotional facial expressions (Barrett et al, 2019); the racially biased labeling of data sets used in supervised learning to train AI (Rhue, 2018); the lack of representation of children's data that leads to poor prediction outcomes (Bryant & Howard, 2019); and the ethically dubious use of technology to manipulate or "nudge" people's behavior without their knowledge or consent. Moreover, there are persuasive arguments that the use of biometric technology in EdTech is currently without evidence of efficacy for learning, and that it contravenes children's human rights (McStay, 2019).…”
Section: How Is AI Used in Education?
confidence: 99%
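
This critique, in particular the point that under-representation of children's data leads to poor prediction outcomes (Bryant & Howard, 2019), implies that a single aggregate accuracy figure can hide subgroup failures. Below is a minimal sketch of a disaggregated evaluation in that spirit; the group labels, record layout, and data are hypothetical and only illustrate the reporting pattern, not the paper's actual protocol.

```python
# Minimal sketch of disaggregated evaluation: report accuracy per demographic
# group instead of one aggregate figure, so an under-represented group (e.g.
# children) is not masked by overall performance. All data here is made up.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, true in records:
        total[group] += 1
        correct[group] += int(pred == true)
    return {g: correct[g] / total[g] for g in total}

# Aggregate accuracy is 4/6, but it hides a complete failure on children.
records = [
    ("adult", "happiness", "happiness"),
    ("adult", "sadness", "sadness"),
    ("adult", "fear", "fear"),
    ("adult", "anger", "anger"),
    ("child", "happiness", "sadness"),
    ("child", "neutral", "fear"),
]
print(accuracy_by_group(records))  # -> {'adult': 1.0, 'child': 0.0}
```
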