2022
DOI: 10.1609/aaai.v36i1.19873
Learning Unseen Emotions from Gestures via Semantically-Conditioned Zero-Shot Perception with Adversarial Autoencoders

Abstract: We present a novel generalized zero-shot algorithm to recognize perceived emotions from gestures. Our task is to map gestures to novel emotion categories not encountered in training. We introduce an adversarial autoencoder-based representation learning that correlates 3D motion-captured gesture sequences with the vectorized representation of the natural-language perceived emotion terms using word2vec embeddings. The language-semantic embedding provides a representation of the emotion label space, and we levera…
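The abstract describes an adversarial autoencoder that aligns gesture representations with word2vec embeddings of the emotion labels, so that unseen emotions can be recognized by similarity in that shared space. Below is a minimal PyTorch sketch of this idea, not the authors' implementation: the layer sizes, the pooled gesture-feature input (FEAT_DIM), and the cosine-similarity inference rule are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): an adversarial autoencoder whose latent
# space is pushed toward word2vec embeddings of emotion labels, enabling
# zero-shot recognition by nearest-label search in that space.
import torch
import torch.nn as nn

EMB_DIM = 300    # word2vec dimensionality (assumed)
FEAT_DIM = 256   # pooled gesture-feature dimensionality (assumed)

class Encoder(nn.Module):
    """Maps a pooled gesture feature vector into the word-embedding space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT_DIM, 512), nn.ReLU(),
                                 nn.Linear(512, EMB_DIM))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs the gesture feature from its latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(EMB_DIM, 512), nn.ReLU(),
                                 nn.Linear(512, FEAT_DIM))
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores whether a latent code looks like a label embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(EMB_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, 1))
    def forward(self, z):
        return self.net(z)

def training_step(x, label_emb, enc, dec, disc, opt_ae, opt_d):
    """One AAE step: reconstruction plus adversarial alignment to label embeddings."""
    bce = nn.BCEWithLogitsLoss()
    # Discriminator: real = word2vec label embeddings, fake = encoded gestures.
    z_fake = enc(x).detach()
    d_loss = bce(disc(label_emb), torch.ones(len(label_emb), 1)) + \
             bce(disc(z_fake), torch.zeros(len(z_fake), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Autoencoder: reconstruct the gesture features and fool the discriminator.
    z = enc(x)
    ae_loss = nn.functional.mse_loss(dec(z), x) + \
              bce(disc(z), torch.ones(len(z), 1))
    opt_ae.zero_grad(); ae_loss.backward(); opt_ae.step()
    return d_loss.item(), ae_loss.item()

def predict_unseen(x, enc, unseen_label_embs):
    """Zero-shot inference: pick the unseen label with the most similar embedding."""
    z = enc(x)
    sims = nn.functional.cosine_similarity(
        z.unsqueeze(1), unseen_label_embs.unsqueeze(0), dim=-1)
    return sims.argmax(dim=1)
```

The sketch uses a plain nearest-embedding rule at test time; how the paper actually scores unseen classes may differ.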

Cited by 4 publications (1 citation statement)
References 49 publications
“…To increase the size of the training set, the authors used a conditional variational autoencoder (CVAE) to generate some synthetic data. Banerjee et al. [295] combined GCN and NLP techniques to achieve zero-shot emotion recognition, which entailed recognizing novel emotion categories not seen during training. The authors used ST-GCN to extract visual features from the 3-D pose sequences and used the word2vec method to obtain word embeddings from emotion labels.…”
Section: Emotion Recognition: Key Ideas and Systems (mentioning)
confidence: 99%
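The citing survey's summary names word2vec embeddings of the emotion labels as the semantic side of the zero-shot mapping. The snippet below is a hypothetical illustration of how such label embeddings could be obtained from pretrained vectors via gensim; the model choice and the example seen/unseen label sets are assumptions, not details from the cited work.

```python
# Hypothetical illustration: word2vec embeddings for seen and unseen emotion labels.
import gensim.downloader as api
import numpy as np

# Pretrained 300-d word2vec vectors; the specific model is an assumption.
w2v = api.load("word2vec-google-news-300")

seen_labels = ["happy", "sad", "angry"]   # example seen classes (assumed)
unseen_labels = ["proud", "bored"]        # example unseen classes (assumed)

seen_embs = np.stack([w2v[w] for w in seen_labels])      # shape (3, 300)
unseen_embs = np.stack([w2v[w] for w in unseen_labels])  # shape (2, 300)

# A gesture feature mapped into this space can then be matched against
# unseen_embs (e.g., by cosine similarity) to label a novel emotion.
```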