2022
DOI: 10.3390/app12041902
Automatic Identification of Emotional Information in Spanish TV Debates and Human–Machine Interactions

Abstract: Automatic emotion detection is a very attractive field of research that can help build more natural human–machine interaction systems. However, several issues arise when real scenarios are considered, such as the tendency toward neutrality, which makes it difficult to obtain balanced datasets, or the lack of standards for the annotation of emotional categories. Moreover, the intrinsic subjectivity of emotional information increases the difficulty of obtaining valuable data to train machine learning-based algor…

Cited by 3 publications (2 citation statements)
References 69 publications (73 reference statements)
“…For future work it would be interesting to explore other classification architectures and label more data to improve the results and make it possible to learn representations for more classes. For example, it would be interesting to have more data of the Enthusiastic class, since, as seen in Figure 1 and in [12], it is quite distinguishable from other emotions in our corpus.…”
Section: Discussion (mentioning, confidence: 99%)
“…In contrast with previous research [12] in which audio segments between 2 and 5 seconds were considered, in this work we used the full audio of each speaker intervention without slicing it, since we considered this could be a more representative unit for emotional recognition. The audio files in which speakers could not be told apart and the ones that were not related to the debates (music, ads, etc.) were removed from the corpus.…”
Section: Spanish (mentioning, confidence: 99%)
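The corpus preparation described in this citation statement (keeping each speaker intervention whole rather than slicing it, and discarding audio with indistinguishable speakers or non-debate content such as music and ads) can be illustrated with a minimal sketch. The directory layout, metadata columns, and flag values below are assumptions made for illustration only; they are not the cited authors' actual pipeline.

# Illustrative sketch of the corpus-filtering step described above.
# Metadata layout and flag values are hypothetical; the cited work only
# states that full, unsliced speaker interventions are kept and that
# overlapping-speaker and non-debate audio is removed.
import csv
from pathlib import Path

def filter_interventions(audio_dir: str, metadata_csv: str) -> list[Path]:
    """Return full, unsliced intervention files that pass both filters."""
    kept = []
    with open(metadata_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Hypothetical columns: "file", "overlapping_speakers",
            # "content_type" (e.g. "debate", "music", "ad").
            if row["overlapping_speakers"] == "yes":
                continue  # speakers cannot be told apart
            if row["content_type"] != "debate":
                continue  # music, ads and other non-debate material
            path = Path(audio_dir) / row["file"]
            if path.exists():
                kept.append(path)  # keep the whole intervention, no slicing
    return kept

if __name__ == "__main__":
    files = filter_interventions("corpus/audio", "corpus/metadata.csv")
    print(f"{len(files)} intervention files retained")

Keeping the intervention as the unit of analysis, as the quoted work argues, trades the fixed-length segments of [12] for a unit that more closely matches one speaker's emotional expression.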