2016
DOI: 10.1007/978-3-319-48746-5_35
Sensing Affective States Using Facial Expression Analysis

Cited by 10 publications (8 citation statements) · References 28 publications
“…A common classification approach is based on distances and angles of landmarks. Samara et al. [24] use the Euclidean distance among face points to train a Support Vector Machine (SVM) model to detect expressions. Similarly, Chang et al. [25] use 12 distances calculated from 14 landmarks to detect fear, love, joy, and surprise.…”
Section: Related Work
confidence: 99%
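The distance-plus-SVM recipe described in the statement above can be sketched as follows. This is a minimal illustration with synthetic landmarks and scikit-learn, not the cited authors' actual pipeline; the 14-landmark count echoes Chang et al. [25], and the two "expression classes" are simulated as landmark clouds that differ in spread (pairwise distances are translation-invariant, so a shape difference, not a position difference, is what the features capture):

```python
# Sketch: pairwise Euclidean distances among facial landmarks as
# features for an SVM expression classifier. Landmarks are synthetic.
from itertools import combinations

import numpy as np
from sklearn.svm import SVC


def landmark_distances(points):
    """All pairwise Euclidean distances among (x, y) landmark points."""
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i, j in combinations(range(len(points)), 2)])


rng = np.random.default_rng(0)
# Two synthetic classes of 14 landmarks each, differing in spread so
# that their pairwise-distance distributions differ.
X = np.array([landmark_distances(rng.normal(0.0, s, size=(14, 2)))
              for s in (0.1, 0.3) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf").fit(X, y)
print(X.shape)  # (40, 91): C(14, 2) = 91 distance features per sample
```

With 14 landmarks, each sample yields C(14, 2) = 91 distance features; the actual feature set, kernel, and training data in [24, 25] are of course those papers' own.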
“…Previous work focuses on detecting facial expressions per se, including the six universal facial expressions of emotion, typically reporting accuracy rates of machine learning models used to detect those predefined facial expressions. A significant number of those approaches train the models using datasets with images and videos of actors performing facial expressions [24, 26–28, 46], subjects watching video clips [25, 29, 38], or subjects undergoing social exposure [38]. As previously mentioned, those are artificial situations that are significantly different from an interaction with a game.…”
Section: Comparison With Previous Work
confidence: 99%
“…However, one can view these percentages differently by considering the fact that some expressions are recognised much more precisely than others. Generally, detecting states such as happy and surprise works considerably better than detecting states such as contempt, neutral, fear, angry, sadness, and disgust, which is possibly due to the similarity in the geometric shape of these expressions (Samara et al., 2016). Moreover, the work presented in (Joho et al., 2009) underlined this type of grouping by devising a pronouncement level for the associated expressions, where these expressions belong to a low pronouncement level.…”
Section: Expression Classification During Human-Computer Interaction
confidence: 99%
“…Subsequently, resultant features are represented by finding the Euclidean distances among all facial landmark points. Consequently, the facial expression is finally represented as a 1176-dimension feature vector, resulting from the pairwise combinations of the 49 Cartesian landmark coordinates (Samara et al., 2016).…”
Section: Feature Extraction and Distance-Based Representation
confidence: 99%
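The 1176-dimensional representation quoted above is exactly the pairwise-combination count C(49, 2) = 49 × 48 / 2 = 1176. A short check, using synthetic landmark coordinates (the landmark values are placeholders, only the dimensionality is the point):

```python
# Check: all pairwise Euclidean distances among 49 landmark points
# yield a C(49, 2) = 1176-dimensional feature vector.
from itertools import combinations
from math import comb

import numpy as np

landmarks = np.random.default_rng(1).random((49, 2))  # synthetic (x, y) points
features = np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(49), 2)])

print(features.shape)  # (1176,)
print(comb(49, 2))     # 1176
```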
“…However, recognising User Perplexity and Confusion is still non-trivial and a challenging objective, particularly in the context of HCI. Additionally, the complexity of human emotions and the peculiarity of the relationship between humans and machines pose a challenge in Affective Computing and relevant research themes (Samara et al., 2016, 2017).…”
Section: User Perplexity in Human-Computer Interaction
confidence: 99%