Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016) 2016
DOI: 10.18653/v1/s16-1062

pkudblab at SemEval-2016 Task 6: A Specific Convolutional Neural Network System for Effective Stance Detection

Abstract: In this paper, we develop a convolutional neural network for stance detection in tweets. According to the official results, our system ranks 1st on subtask B (among 9 teams) and 2nd on subtask A (among 19 teams) on the Twitter test set of SemEval-2016 Task 6. The main contributions of our work are as follows. We design a "vote scheme" for prediction instead of predicting when the accuracy of the validation set reaches its maximum. Besides, we make some improvements on the specific subtasks. For subtask A, we s…
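
The "vote scheme" mentioned in the abstract replaces the usual pick-the-best-validation-epoch rule with a vote over many intermediate predictions. Below is a minimal sketch of that idea, assuming a classifier with a Keras-like fit/predict interface; the voting details (number of snapshots, whether a validation-accuracy threshold gates them) are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def predict_by_voting(model, x_train, y_train, x_val, y_val, x_test,
                      n_epochs=20, val_acc_threshold=0.5):
    """Accumulate test-set votes from every epoch whose validation accuracy
    clears a threshold, then return the majority label per example."""
    votes = []
    for _ in range(n_epochs):
        model.fit(x_train, y_train, epochs=1, batch_size=64, verbose=0)
        val_pred = np.argmax(model.predict(x_val), axis=1)
        if (val_pred == y_val).mean() >= val_acc_threshold:
            votes.append(np.argmax(model.predict(x_test), axis=1))
    votes = np.stack(votes)                      # shape: (n_snapshots, n_test)
    n_classes = votes.max() + 1
    counts = np.apply_along_axis(np.bincount, 0, votes, minlength=n_classes)
    return counts.argmax(axis=0)                 # majority label per tweet
```

Compared with keeping only the single checkpoint that maximizes validation accuracy, voting over many snapshots smooths out epoch-to-epoch variance on a small, noisy validation set.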

Cited by 120 publications (100 citation statements) | References 8 publications
Citation statements (2017–2022), ordered by relevance:
“…Another study [45] follows the same line by constructing a corpus that contains words that are against and in favor of each target to enrich the models. Similarly, [52] used a domain corpus related to Trump, along with a lexicon, to construct a labeled dataset to detect stance towards Trump. Furthermore, [7] used the context of users' tweets to construct author embeddings and predict the stance.…”
Section: Related Work (mentioning)
Confidence: 99%
“…Thus, the settings of both of the above studies differed from those of SemEval-2016 Task 6.A. [16,19] ignored the target information while performing classification, whereas our experiments have clearly shown that a target-specific vector representation of tweets can substantially boost performance. [9] relied on feature engineering and a large domain corpus to perform feature selection, which was hard to generalise to other targets; collecting the domain corpus added further difficulty because of the limitations of the Twitter API.…”
Section: Related Work (mentioning)
Confidence: 79%
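
The point about target-specific tweet representations can be made concrete with a small sketch: one simple (hypothetical) way to condition on the target is to append a one-hot target indicator to whatever fixed-length tweet vector the encoder produces. The five target names below are those of SemEval-2016 Task 6.A, but the encoding itself is an illustration, not the cited papers' architecture.

```python
import numpy as np

# The five targets of SemEval-2016 Task 6.A.
TARGETS = ["Atheism", "Climate Change is a Real Concern", "Feminist Movement",
           "Hillary Clinton", "Legalization of Abortion"]

def target_specific_vector(tweet_vec, target):
    """Append a one-hot target indicator so a single classifier can
    condition its stance decision on the target being discussed."""
    one_hot = np.zeros(len(TARGETS), dtype=tweet_vec.dtype)
    one_hot[TARGETS.index(target)] = 1.0
    return np.concatenate([tweet_vec, one_hot])
```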
“…We compared the proposed models with the two best performing models in SemEval-2016 Task 6.A: (1) MITRE [19], which trained separate Long Short-Term Memory (LSTM) networks with a voting scheme for different targets; the LSTM networks were pre-trained by an auxiliary hashtag prediction task on 298,973 self-collected tweets; (2) pkudblab [16], which also trained separate CNN classifiers for different targets, with a voting scheme employed both in and out of each epoch to improve the performance. We also compared against SVM classifiers trained on the corresponding training datasets for the five targets, using word n-gram and character n-gram features, as reported in [8], representing the previous best performer for this task.…”
Section: Comparison Models (mentioning)
Confidence: 99%
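
The SVM baseline with word and character n-gram features referred to above can be approximated in a few lines of scikit-learn; the exact n-gram ranges, feature weighting, and SVM hyperparameters of [8] are not reproduced here, so the values below are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline, make_union
from sklearn.svm import LinearSVC

def make_ngram_svm():
    """Linear SVM over concatenated word and character n-gram features."""
    features = make_union(
        TfidfVectorizer(analyzer="word", ngram_range=(1, 3)),     # word 1-3 grams
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),  # char 2-5 grams
    )
    return make_pipeline(features, LinearSVC())

# Usage sketch: one classifier per target, trained on that target's tweets.
# clf = make_ngram_svm(); clf.fit(train_tweets, train_stances)
# preds = clf.predict(test_tweets)
```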
“…Table III shows the comparison of our proposed system with the existing state-of-the-art systems of SemEval-2016 Task 6 for the sentiment dataset. [8] used a feature-based SVM, [40] used keyword rules, LitisMind relied on hashtag rules applied to external data, [39] utilized a combination of sentiment classifiers and rules, whereas [38] used a maximum entropy classifier with domain-specific features. Our system comfortably surpasses the existing best system at SemEval.…”
Section: Models (mentioning)
Confidence: 99%