2011
DOI: 10.1109/tsmcb.2010.2103557
Automatic Recognition of Non-Acted Affective Postures

Abstract: The conveyance and recognition of affect and emotion partially determine how people interact with others and how they carry out and perform in their day-to-day activities. Hence, it is becoming necessary to endow technology with the ability to recognize users' affective states to increase the technologies' effectiveness. This paper makes three contributions to this research area. First, we demonstrate recognition models that automatically recognize affective states and affective dimensions from non-acted body …

Cited by 146 publications (119 citation statements)
References 46 publications
“…In the affective computing field [12], various studies have been carried out to create systems that can recognize the affective states of their user by analyzing their vocal [1], facial [11] [17], and body expressions [4], and even their physiological changes [6]. Most of the work has been carried out on acted or stereotypical expressions.…”
Section: Introduction
confidence: 99%
“…To evaluate the performance of the automatic recognition system, we followed a simplified version of the method proposed in [19]. The evaluation method proposed in [19] requires three groups of observers in order to fully separate the computation of the benchmark from the testing of the system.…”
Section: Discussion
confidence: 99%
“…Only 36% accuracy was, instead, obtained for 'concentration'. The low accuracy obtained for 'concentration' could be due to the fact that the human observers may have used this label when the avatar's expression did not express any of the other affective states as discussed in [19]. Finally, the column named as 'Multiple classes' in table 3 contains the number of the test samples that our algorithm was not able to categorize into only one class.…”
Section: Low-level Motion Description
confidence: 99%