Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1552
Emotion Detection with Neural Personal Discrimination

Abstract: There has been a recent line of work on automatically predicting the emotions of posts in social media. Existing approaches consider the posts individually and predict their emotions independently. Different from previous research, we explore the dependence among relevant posts via the authors' backgrounds, since authors with similar backgrounds, e.g., gender or location, tend to express similar emotions. However, such personal attributes are not easy to obtain in most social media websites, and it is hard …

Cited by 7 publications (1 citation statement)
References 21 publications
“…Other works (Zhou et al, 2019; Spiliopoulou et al, 2020) used the multi-task technique alongside adversarial learning, but in configurations other than ASP-MTL. For example, Zhou et al (2019) employed a model with a shared feature extractor followed by one branch per task, with an attention mechanism in the first layer of each branch. Adding new branches to the model, together with adversarial learning, increased the F1-score over the baseline model.…”
Section: Related Work
confidence: 99%
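The architecture described in the citation statement (a shared feature extractor followed by per-task branches, each with an attention mechanism in its first layer, trained alongside an adversarial component) can be sketched in pure Python. Everything below is illustrative: the dimensions, fixed weights, task names, and queries are assumptions for the sketch, not the authors' model, and the adversarial discriminator is only indicated in a comment since this sketch has no training loop or autograd.

```python
import math

def linear(x, W, b):
    """Plain matrix-vector product: y[j] = sum_i x[i] * W[i][j] + b[j]."""
    return [sum(x[i] * W[i][j] for i in range(len(x))) + b[j]
            for j in range(len(b))]

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(features, query):
    """Task-specific attention: weight each shared feature vector by its
    dot-product similarity to the task's query, return the weighted sum."""
    weights = softmax([sum(f[i] * query[i] for i in range(len(query)))
                       for f in features])
    dim = len(features[0])
    return [sum(w * f[i] for w, f in zip(weights, features))
            for i in range(dim)]

# Toy dimensions and fixed parameters (illustrative values only).
IN, HID = 4, 3
W_shared = [[0.1 * (i + j) for j in range(HID)] for i in range(IN)]
b_shared = [0.0] * HID

# One branch per task: an attention query plus a small classifier head.
tasks = {
    "emotion": {"query": [1.0, 0.0, 0.0],
                "W": [[0.5], [0.2], [-0.1]], "b": [0.0]},
    "gender":  {"query": [0.0, 1.0, 0.0],
                "W": [[-0.3], [0.4], [0.1]], "b": [0.1]},
}

def forward(token_embeddings, task):
    # 1) Shared feature extractor, applied to every token embedding.
    shared = [linear(x, W_shared, b_shared) for x in token_embeddings]
    # 2) Task-specific attention over the shared features (first layer
    #    of this task's branch).
    pooled = attend(shared, tasks[task]["query"])
    # 3) Task-specific head. In adversarial training, a discriminator
    #    would also read `shared` through a gradient-reversal layer so
    #    the extractor learns task-invariant features; omitted here.
    return linear(pooled, tasks[task]["W"], tasks[task]["b"])

post = [[0.1, 0.2, 0.3, 0.4], [0.4, 0.3, 0.2, 0.1]]  # two token embeddings
print(forward(post, "emotion"), forward(post, "gender"))
```

Each added task contributes only a query and a head, while the shared extractor is reused, which is what makes adding new branches cheap in this family of multi-task models.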