2018
DOI: 10.1111/coin.12189

Exploring deep neural networks for multitarget stance detection

Abstract: Detecting subjectivity expressed toward concerned targets is an interesting problem and has received intensive study. Previous work often treated each target independently, ignoring the potential (sometimes very strong) dependency that could exist among targets (e.g., the subjectivity expressed toward two products or two political candidates in an election). In this paper, we relax this independence assumption in order to jointly model the subjectivity expressed toward multiple targets. We propose and show …

Cited by 33 publications (38 citation statements)
References 29 publications
“…Overall, MTL-LP is the best performing multilabel classification method across all the datasets. MTL-LP is also better than the best performing model Seq2Seq reported in Sobhani et al (2019) for the ETC dataset. MTL-XLD improves on the baseline models for the BBC and MFTC datasets, but performs slightly worse than MTL on the ETC dataset.…”
Section: Results
confidence: 83%
“…Such cases need to be formulated as multilabel classification (Sorower, 2010), where an instance can receive multiple, non-mutually exclusive labels. Examples:

- Brexit Blog Corpus (Simaki et al, 2018). Utterance: "rivalry between the us and china is inevitable but it needs to be kept within bounds that would preclude the use of military force." Stances: certainty, contrariety, necessity, prediction
- US Election Twitter Corpus (Sobhani et al, 2019). Utterance: "voters mean more than super delegates @sensanders corrupt -> #hillaryclinton spends millions on msm to discourage #americans voting #sanders" Stances: Clinton: AGAINST, Sanders: FAVOR
- Moral Foundations Twitter (Dehghani et al, 2019). Utterance: "blatant racism in #colorado, #blacklivesmatter http://fb.me/1ibyxmswm" Stances: cheating, harm

The most commonly used approaches to multilabel classification treat the task by learning models for each label.…”
Section: Introduction
confidence: 99%
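The excerpt above contrasts per-label models with methods such as MTL-LP that capture label dependencies. A minimal sketch of the two standard reductions of multilabel classification — binary relevance (one independent yes/no target per label) and label powerset (each observed label combination becomes one class, the idea behind "LP") — with purely illustrative data, not from the cited papers:

```python
# Hedged sketch (illustrative names and data, not the cited papers' code).

def to_binary_relevance(labelsets, all_labels):
    """Binary relevance: one 0/1 target column per label, per instance."""
    return [[int(lab in s) for lab in all_labels] for s in labelsets]

def to_label_powerset(labelsets):
    """Label powerset: each distinct combination of labels becomes one
    class id, so dependencies between labels are preserved."""
    classes = {}
    ids = []
    for s in labelsets:
        ids.append(classes.setdefault(frozenset(s), len(classes)))
    return ids, classes

# Toy instances labeled with stances toward two targets:
data = [{"Clinton:AGAINST", "Sanders:FAVOR"},
        {"Clinton:FAVOR"},
        {"Clinton:AGAINST", "Sanders:FAVOR"}]

print(to_binary_relevance(data, ["Clinton:AGAINST", "Clinton:FAVOR", "Sanders:FAVOR"]))
# -> [[1, 0, 1], [0, 1, 0], [1, 0, 1]]
print(to_label_powerset(data)[0])
# -> [0, 1, 0]
```

The powerset transform is what lets a single multiclass learner model co-occurring stances jointly, at the cost of one class per observed combination.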
“…Gamalet et al [47] used ten different ML algorithms with two feature extraction algorithms, implemented on four SA datasets (IMDB, Cornell movies, Amazon, and Twitter), in a comparative analysis of their methodology. Sobhani et al [48] investigated the problem of jointly predicting the stance expressed toward multiple targets in Twitter posts. Stance detection is the task of automatically determining from a text whether its author is for, against, or neutral toward a proposition or target.…”
Section: Theoretical Framework
confidence: 99%
“…We compared the performance of our proposed method (MKC-LSTMVs-ATT) with state-of-the-art multi-target stance detection methods, including the Seq2Seq method proposed by Sobhani et al [50] and the DMAN method proposed by Wei et al [23]. The comparative results are presented in Table 7.…”
Section: Comparison With Related Work
confidence: 99%
“…The comparative results are presented in Table 7. Seq2Seq [50] is an attention-based encoder-decoder deep neural network model for multi-target stance detection: the encoder converts the input text into a vector representation, and the decoder generates stance labels toward the multiple targets.…”
Section: Comparison With Related Work
confidence: 99%
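The excerpt above describes the core Seq2Seq idea: stances for several targets are predicted one after another, each conditioned on the previous target's label. A toy sketch of that sequential-decoding idea — the bag-of-words encoder, scoring function, and vocabulary below are illustrative placeholders, not the published model:

```python
# Toy sketch (assumed names; stands in for an attention-based
# encoder-decoder, which would use learned RNN/attention weights).

LABELS = ["FAVOR", "AGAINST", "NONE"]

def encode(tokens, vocab):
    """Bag-of-words count vector, standing in for the neural encoder."""
    vec = [0.0] * len(vocab)
    for t in tokens:
        if t in vocab:
            vec[vocab[t]] += 1.0
    return vec

def decode_stances(text_vec, targets, score):
    """Greedy 'decoder': one stance per target, feeding the previous
    target's label back in so cross-target dependencies are modeled."""
    prev, out = "NONE", {}
    for tgt in targets:
        prev = max(LABELS, key=lambda lab: score(text_vec, tgt, prev, lab))
        out[tgt] = prev
    return out

# Toy scorer: "support" tokens push toward FAVOR, "oppose" toward AGAINST,
# plus a small bonus for repeating the previous target's label.
def score(vec, tgt, prev, lab):
    s = 0.0
    if lab == "FAVOR":
        s += vec[0]
    if lab == "AGAINST":
        s += vec[1]
    if lab == prev:
        s += 0.5
    return s

vocab = {"support": 0, "oppose": 1}
vec = encode("i support clinton and sanders".split(), vocab)
print(decode_stances(vec, ["Clinton", "Sanders"], score))
# -> {'Clinton': 'FAVOR', 'Sanders': 'FAVOR'}
```

The feedback of the previous label into each step is what distinguishes this family of models from predicting each target's stance independently.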