Proceedings of the Sixth International Conference on Technological Ecosystems for Enhancing Multiculturality 2018
DOI: 10.1145/3284179.3284230
A comparison of students' emotional self-reports with automated facial emotion recognition in a reading situation

Cited by 3 publications (2 citation statements)
References 18 publications
“…To answer the first research question, the high degree of consistency between the emotional self-reports and the automatic system results, around 70%, in contrast with Hirt et al (2018), suggests that the software is reasonably reliable to determine the emotional valence of the students in the learning context. However, only 60% of the video material collected is recognized and processed by the emotional valence recognition software, which emphasizes the need for adequate framed facial images for the AI system to work (Grm et al, 2018).…”
Section: Discussion
Confidence: 99%
“…Furthermore, learning contexts involve complexity that does not always allow for the adequate recording of participants' faces, which can limit the reliability of such systems (Hirt et al, 2018). The use of hybrid convolutional neural networks, combining facial expressions with hand gestures and body postures, can minimize this issue (Ashwin & Ram Mohana Reddy, 2020), although the correct interpretation of an emotional expression, as seen before, poses a complex process involving multiple parameters.…”
Section: Introduction
Confidence: 99%