Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016) 2016
DOI: 10.18653/v1/s16-1068
NLDS-UCSC at SemEval-2016 Task 6: A Semi-Supervised Approach to Detecting Stance in Tweets

Abstract: Stance classification aims to identify, for a particular issue under discussion, whether the speaker or author of a conversational turn takes a Pro (Favor) or Con (Against) stance on the issue. Detecting stance in tweets is a new task proposed for SemEval-2016 Task 6, involving predicting stance for a dataset of tweets on the topics of abortion, atheism, climate change, feminism, and Hillary Clinton. Given the small size of the dataset, our team created our own topic-specific training corpus by developing a set of h…

Cited by 19 publications (13 citation statements)
References 17 publications
“…Elsewhere, we present algorithms for automatically identifying and bootstrapping arguments with different properties. We have methods to extract arguments that represent different stances on an issue (Misra et al., 2016a; Anand et al., 2011; Sridhar et al., 2015; Walker et al., 2012a; Walker et al., 2012b), as well as argument exchanges that are agreements vs. disagreements, factual vs. emotional arguments (Oraby et al., 2015), sarcastic and not-sarcastic arguments, and nasty vs. nice arguments (Oraby et al., 2016; Lukin and Walker, 2013; Justo et al., 2014).…”
Section: Discussion
confidence: 99%
“…In order to reduce this dependency on labeled datasets, a line of research has proposed machine learning frameworks that require only weak supervision. Weakly supervised stance detection relies on different characteristics related to an event, such as stance-related information from annotated hashtags [13, 24, 32], user behavior and information [16, 22, 42], specific key phrases related to events [19], and topical information [12]. However, all of these approaches require annotated event-specific Twitter handles, hashtags, or key phrases.…”
Section: Weak Supervised Approach
confidence: 99%
“…In another approach to the SemEval 2016 task, Misra et al. (2016) retrieved tweets with hashtags that were “stance-bearing on their own” and labelled them accordingly. These tweets were then used as labelled examples to train a classifier.…”
Section: Relevant Work
confidence: 99%
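The hashtag-based weak labeling described in the citation above can be sketched as follows. This is a minimal illustration, not the authors' actual method: the hashtag sets (`FAVOR_TAGS`, `AGAINST_TAGS`) are hypothetical stand-ins for a curated stance-bearing hashtag lexicon, and the real system would feed the resulting labeled tweets into a downstream classifier.

```python
# Sketch of hashtag-based weak supervision: a tweet whose hashtags are
# stance-bearing on their own receives that stance label and becomes a
# training example; all other tweets are left unlabeled.
# The hashtag sets below are illustrative assumptions only.

FAVOR_TAGS = {"#prochoice"}    # hypothetical Favor-bearing hashtags
AGAINST_TAGS = {"#prolife"}    # hypothetical Against-bearing hashtags

def weak_label(tweet: str):
    """Return 'FAVOR', 'AGAINST', or None if the tweet stays unlabeled."""
    tags = {tok.lower() for tok in tweet.split() if tok.startswith("#")}
    favor = bool(tags & FAVOR_TAGS)
    against = bool(tags & AGAINST_TAGS)
    if favor == against:       # no stance-bearing tag, or conflicting tags
        return None
    return "FAVOR" if favor else "AGAINST"

tweets = [
    "My body my choice #ProChoice",
    "Every life matters #prolife",
    "Interesting debate today",
]
# Keep only the weakly labeled tweets as training examples.
labeled = [(t, weak_label(t)) for t in tweets if weak_label(t) is not None]
```

Tweets matching no stance-bearing hashtag (or conflicting ones) are discarded rather than guessed, which keeps the weak labels high-precision at the cost of coverage.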