2019
DOI: 10.1007/978-3-030-36802-9_70

Attention Based Shared Representation for Multi-task Stance Detection and Sentiment Analysis

Cited by 12 publications (5 citation statements)
References 11 publications
“…In their study, the researchers used a sentiment lexicon to build a stance lexicon and incorporated it into the attention layer of the model. Another work by [47] used sentiment as an auxiliary task to predict stance on a SemEval stance dataset.…”
Section: Stance vs Sentiment Quantitatively
confidence: 99%
“…Their MTL model achieved a 14.4% higher macro-F1 score than state-of-the-art (SOTA) models on the FNC-1 dataset [49]. Only a few studies have investigated the joint learning of stance and sentiment detection through MTL [10,11,50,51]. Sun et al [10] proposed an LSTM-based model to simultaneously capture the stance and sentiment information of a post; however, their model does not incorporate an attention mechanism.…”
Section: Related Work
confidence: 99%
“…Equal weighting assigns the same weight to each task loss. In the domain of stance detection, all existing studies that have proposed MTL models have uniformly adopted equal weighting for the associated tasks [10,11,48,50,51,59,60]. While this approach is straightforward, it operates under the assumption that all tasks bear equal importance, an assumption that does not hold true for various NLP tasks.…”
Section: Related Work
confidence: 99%
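The equal-weighting scheme described above can be sketched in a few lines. This is a minimal illustration, not any cited model's implementation; the function name and the example loss values are hypothetical.

```python
# Minimal sketch of equal task-loss weighting in multi-task learning.
# All names and values here are illustrative assumptions.

def combined_loss(task_losses, weights=None):
    """Weighted sum of per-task losses; equal weighting by default."""
    if weights is None:
        # Equal weighting: every task contributes with weight 1.0.
        weights = [1.0] * len(task_losses)
    return sum(w * l for w, l in zip(weights, task_losses))

# Example: combine a stance loss and a sentiment loss with equal weights.
stance_loss, sentiment_loss = 0.62, 0.35
print(round(combined_loss([stance_loss, sentiment_loss]), 2))  # → 0.97
```

Because every task is weighted 1.0, a task whose loss happens to be on a larger numeric scale dominates the gradient, which is exactly the limitation the passage points out.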
“…The study by (Li and Caragea 2019) uses sentiment to predict the stance through a multi-task learning model. Another work by (Chauhan, Kumar, and Ekbal 2019) uses sentiment as an auxiliary task to predict attitude. However, in our work, multiple features in the form of text and topic words are used to separate the task-dependent and independent feature spaces and perform both tasks simultaneously by using attention frameworks to focus on the most important feature representations and discarding the useless shared features that may affect the performance of both tasks.…”
Section: Related Workmentioning
confidence: 99%
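The idea of drawing task-specific representations from a shared feature space via attention can be sketched as follows. This is a hedged toy example under assumed shapes, not the architecture of the paper: the shared encoder output is simulated with random features, and each task supplies its own query vector.

```python
import numpy as np

# Toy sketch: task-specific attention pooling over shared features.
# Shapes, names, and the random "encoder output" are illustrative assumptions.

def attention_pool(features, query):
    """Softmax-weighted sum of token features given a task query vector."""
    scores = features @ query                  # one score per token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax attention weights
    return weights @ features                  # pooled task representation

rng = np.random.default_rng(0)
shared = rng.normal(size=(5, 4))    # shared encoder output: 5 tokens, dim 4
stance_q = rng.normal(size=4)       # task-specific query for stance
sentiment_q = rng.normal(size=4)    # task-specific query for sentiment

stance_repr = attention_pool(shared, stance_q)
sentiment_repr = attention_pool(shared, sentiment_q)
print(stance_repr.shape, sentiment_repr.shape)  # → (4,) (4,)
```

Each task reads the same shared features but attends to different tokens, which is the intuition behind separating task-dependent and task-independent feature spaces.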