Proceedings of the 2nd Multimodal Sentiment Analysis Challenge (MuSe 2021)
DOI: 10.1145/3475957.3484448
Fusion of Acoustic and Linguistic Information using Supervised Autoencoder for Improved Emotion Recognition

Cited by 6 publications (3 citation statements) · References 23 publications
“…A more popular method, however, is bag-of-words with subsequent tf-idf weighting [12], which is based on broad language traits. In line with previous studies, we later use tf-idf features as our baseline for machine learning models [13].…”
Section: Related Work
confidence: 99%
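The citing work above uses a bag-of-words representation with tf-idf weighting as its textual baseline. As an illustration of that weighting scheme (not the cited authors' implementation; the function name and the raw, unsmoothed idf variant are my own choices), a minimal sketch:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute per-document tf-idf weights over whitespace tokens."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    # Document frequency: in how many documents each term appears.
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        # Raw tf-idf; terms occurring in every document get weight 0.
        vectors.append({t: (c / len(toks)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

vecs = tfidf_vectors(["happy happy joy", "sad joy"])
# "joy" appears in every document, so its idf (and weight) is zero.
```

Library implementations (e.g. scikit-learn's `TfidfVectorizer`) typically add idf smoothing and L2 normalization on top of this basic form.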
“…Using the openSMILE toolkit [25], we extract 88 extended Geneva Minimalistic Acoustic Parameter Set (eGeMAPS) features [24], which have shown their suitability for sentiment analysis and SER tasks [8,59]. For each sub-challenge, we apply the standard configuration and extract features with a window size of two seconds and a hop size of 500 ms.…”
Section: eGeMAPS
confidence: 99%
“…SMILE toolkit [29] to extract 88-dimensional extended Geneva Minimalistic Acoustic Parameter Set (eGeMAPS) features [28], which have been shown to be robust for sentiment analysis and SER tasks [10,50,62,66]. We employ the standard configuration for each sub-challenge and extract features using a window size of 2000 ms and a hop size of 500 ms.…”
Section: eGeMAPS
confidence: 99%
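Both citing works frame the audio with a 2000 ms window and a 500 ms hop before feature extraction. The number of windows this yields for a given clip follows directly from those two parameters; a minimal sketch of that arithmetic (openSMILE handles the actual framing internally, and the function name here is my own):

```python
def num_windows(duration_ms, win_ms=2000, hop_ms=500):
    """Count complete win_ms windows, advancing by hop_ms each step."""
    if duration_ms < win_ms:
        return 0  # clip shorter than one full window
    return (duration_ms - win_ms) // hop_ms + 1

# A 10-second clip yields (10000 - 2000) // 500 + 1 = 17 windows,
# each of which would produce one 88-dimensional eGeMAPS vector.
```

So the feature sequence length scales roughly as 2 vectors per second of audio once the clip exceeds the window length.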