Take an Emotion Walk: Perceiving Emotions from Gaits Using Hierarchical Attention Pooling and Affective Mapping
2020 | DOI: 10.1007/978-3-030-58607-2_9

Abstract: We present an autoencoder-based semi-supervised approach to classify perceived human emotions from walking styles obtained from videos or from motion-captured data and represented as sequences of 3D poses. Given the motion on each joint in the pose at each time step extracted from 3D pose sequences, we hierarchically pool these joint motions in a bottom-up manner in the encoder, following the kinematic chains in the human body. We also constrain the latent embeddings of the encoder to contain the space of psyc…
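To make the abstract's encoder concrete, here is a minimal sketch of bottom-up hierarchical pooling over kinematic chains. The joint groupings, layer sizes, and the mean-pooling operator are illustrative assumptions, not the authors' exact architecture:

```python
# Sketch: pool per-joint motion features chain-wise, then body-wise, bottom-up.
# Skeleton layout and pooling choices are hypothetical, for illustration only.
import torch
import torch.nn as nn

# Hypothetical 16-joint skeleton split into five kinematic chains.
CHAINS = {
    "left_leg":  [0, 1, 2],
    "right_leg": [3, 4, 5],
    "spine":     [6, 7, 8],
    "left_arm":  [9, 10, 11],
    "right_arm": [12, 13, 14, 15],
}

class HierarchicalPoolEncoder(nn.Module):
    """Encodes per-joint motions into a latent embedding via chain-wise pooling."""

    def __init__(self, joint_dim=3, hidden=32, latent=16):
        super().__init__()
        self.joint_mlp = nn.Sequential(nn.Linear(joint_dim, hidden), nn.ReLU())
        self.chain_mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.body_mlp = nn.Linear(hidden, latent)

    def forward(self, x):
        # x: (batch, time, joints, 3) per-joint motion, e.g. frame differences
        h = self.joint_mlp(x)                                # per-joint features
        chain_feats = [self.chain_mlp(h[:, :, idx].mean(dim=2))
                       for idx in CHAINS.values()]           # pool joints -> chains
        body = torch.stack(chain_feats, dim=2).mean(dim=2)   # pool chains -> body
        return self.body_mlp(body.mean(dim=1))               # pool time -> latent

enc = HierarchicalPoolEncoder()
z = enc(torch.randn(8, 240, 16, 3))  # 8 clips, 4 s at 60 Hz, 16 joints
print(z.shape)                       # torch.Size([8, 16])
```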

Cited by 40 publications (43 citation statements) | References 70 publications
“…This section describes the numerous experiments that were run to tune the hyperparameters of the network as well as to train and test the proposed network. All the experiments discussed in this paper were run with a data split of 80:10:10 for training, validation, and testing, using stratified shuffling on the Edinburgh Locomotion Mocap Dataset (ELMD), which was collected by researchers from the University of Edinburgh [43] and annotated by Bhattacharya et al. [37]. The modified ELMD dataset consists of 1835 gait sequences recorded for 4 s at 60 Hz.…”
Section: Results
Confidence: 99%
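A minimal sketch of the 80:10:10 stratified split described in that excerpt, assuming the sequences and emotion labels are already loaded as arrays; two passes are needed because train_test_split makes one split at a time, and exact counts vary slightly with rounding:

```python
# Sketch: 80:10:10 stratified split of 1835 gait sequences (shapes illustrative).
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.randn(1835, 240, 16, 3)   # hypothetical: 1835 gaits, 4 s at 60 Hz
y = np.random.randint(0, 4, size=1835)  # hypothetical emotion labels

# First carve out the 10% test set, stratified by label.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.10, stratify=y, shuffle=True, random_state=0)
# Then split the remaining 90% into 80/10 of the original data (1/9 of the rest).
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=1 / 9, stratify=y_rest, shuffle=True, random_state=0)
print(len(X_train), len(X_val), len(X_test))  # ~ 80:10:10 of 1835 sequences
```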
“…Another widely adopted methodology for processing sequential gait data is the use of Recurrent Neural Networks (RNNs) [37, 38]. These works model each frame of a 2D or 3D gait sequence as a collection of body joints, and the entire sequence as the collection of those frames.…”
Section: Related Work
Confidence: 99%
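A minimal sketch of the RNN formulation that excerpt describes, where each frame is the set of body joints and the recurrent network consumes the frame sequence; the GRU choice and layer sizes are illustrative assumptions:

```python
# Sketch: flatten each pose frame into a vector and run a GRU over the sequence.
import torch
import torch.nn as nn

class GaitRNN(nn.Module):
    def __init__(self, n_joints=16, coord_dim=3, hidden=64, n_classes=4):
        super().__init__()
        self.rnn = nn.GRU(n_joints * coord_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, poses):
        # poses: (batch, time, joints, coords) -> flatten joints into a frame vector
        b, t, j, c = poses.shape
        _, h_last = self.rnn(poses.reshape(b, t, j * c))
        return self.head(h_last[-1])  # classify from the final hidden state

model = GaitRNN()
logits = model(torch.randn(8, 240, 16, 3))
print(logits.shape)  # torch.Size([8, 4])
```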
“…These affective features can be observed at different scales: they can be localized joint movements, such as rapid arm swings and head jerks indicating excitement or anger, as well as macroscopic body movements, such as the upper body being expanded, indicating pride or confidence, or collapsed, indicating shame or nervousness. Subsequently, there has been work on detecting perceived emotions by leveraging known affective features either as input to a neural network [6] or to constrain the embedding space [9]. In contrast, we design our neural network to explicitly attend to the body movements at these multiple scales to learn latent affective features directly from the input gesture samples.…”
Section: Perceiving Affective Body Expressions
Confidence: 99%
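One way to read "attending at multiple scales" is an attention pass over local per-joint features and another over coarser per-chain features. The two-scale design and the shared scoring layer below are illustrative assumptions, not the cited network:

```python
# Sketch: attention pooling at two scales (per-joint and per-chain features).
import torch
import torch.nn as nn

class TwoScaleAttention(nn.Module):
    def __init__(self, feat_dim=32):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)  # shared attention-scoring layer

    def attend(self, feats):
        # feats: (batch, n_parts, feat_dim) -> attention-weighted sum over parts
        w = torch.softmax(self.score(feats), dim=1)
        return (w * feats).sum(dim=1)

    def forward(self, joint_feats, chain_feats):
        # joint_feats: (batch, n_joints, d) local movements, e.g. arm swings
        # chain_feats: (batch, n_chains, d) macroscopic posture, e.g. expanded torso
        return torch.cat([self.attend(joint_feats), self.attend(chain_feats)], dim=-1)

att = TwoScaleAttention()
fused = att(torch.randn(8, 16, 32), torch.randn(8, 5, 32))
print(fused.shape)  # torch.Size([8, 64])
```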
“…Each affective expression is a combination of one or more affective features; e.g., rapid arm swings and head jerks are often used as expressions of anger or excitement [26]. A multitude of macroscopic and microscopic factors influence the affective features in a given context, including the social setting and the speaker's idiosyncrasies, making an exhaustive enumeration of affective features tedious and challenging [9]. Nevertheless, it is essential to learn these affective features to understand and synthesize the desired affective expressions.…”
Section: Introduction
Confidence: 99%