Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016) 2016
DOI: 10.18653/v1/s16-1130
PMI-cool at SemEval-2016 Task 3: Experiments with PMI and Goodness Polarity Lexicons for Community Question Answering

Abstract: We describe our submission to SemEval-2016 Task 3 on Community Question Answering. We participated in subtask A, which asks systems to rerank the comments in the thread of a given forum question from good to bad. Our approach focuses on the generation and use of goodness polarity lexicons, similar to the sentiment polarity lexicons that are popular in sentiment analysis. In particular, we use a combination of bootstrapping and pointwise mutual information to estimate the strength of association between a…
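The core idea in the abstract, scoring each word by its pointwise mutual information with Good versus Bad comments, can be sketched as follows. This is a minimal illustration rather than the authors' exact pipeline (which also involves bootstrapping over additional data); the function name, the presence-based counting, and the add-one smoothing are assumptions made for the sketch.

```python
import math
from collections import Counter

def goodness_pmi_lexicon(labeled_comments, min_count=2):
    """Score each word by PMI(word, Good) - PMI(word, Bad).

    labeled_comments: iterable of (tokens, label) pairs,
    where label is "Good" or "Bad". Positive scores indicate
    association with Good answers, negative with Bad ones.
    """
    word_label = Counter()   # (word, label) co-occurrence counts
    word_total = Counter()   # number of comments containing each word
    label_total = Counter()  # number of comments per label
    n = 0
    for tokens, label in labeled_comments:
        for w in set(tokens):  # count presence per comment, not frequency
            word_label[(w, label)] += 1
            word_total[w] += 1
        label_total[label] += 1
        n += 1

    lexicon = {}
    for w, wc in word_total.items():
        if wc < min_count:
            continue  # rare words give unreliable PMI estimates
        # add-one smoothing keeps the log defined when a word
        # never co-occurs with one of the two labels
        p_w_good = (word_label[(w, "Good")] + 1) / (n + 2)
        p_w_bad = (word_label[(w, "Bad")] + 1) / (n + 2)
        p_w = wc / n
        pmi_good = math.log(p_w_good / (p_w * (label_total["Good"] / n)))
        pmi_bad = math.log(p_w_bad / (p_w * (label_total["Bad"] / n)))
        lexicon[w] = pmi_good - pmi_bad
    return lexicon
```

In use, the resulting real-valued scores can feed ranking features the way sentiment lexicon scores do in sentiment analysis: words strongly associated with Good answers get positive weights, words associated with Bad answers get negative ones.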

Cited by 17 publications (11 citation statements). References 18 publications (14 reference statements).
“…Moreover, given the recent success of deep neural networks, which largely eliminate the need for manual feature engineering, we are interested in exploring various deep neural network architectures, e.g., based on LSTMs, GRUs, CNNs, RNMs, etc., as well as various ways to generate high-quality word embeddings (Mihaylov and Nakov 2016b) and goodness polarity lexicons (Balchev et al 2016; Mihaylov et al 2017), e.g., as was done for SemEval-2016 Task 3 for English. We would also like to compare side-by-side deep neural networks to convolutional syntactic kernels, and to investigate ways to combine them.…”
Section: Results
confidence: 99%
“…Finally, we should mention some interesting features used by the participating systems across all three subtasks. This includes fine-tuned word embeddings (Mihaylov and Nakov, 2016b); features modeling text complexity, veracity, and user trollness (Mihaylova et al., 2016); sentiment polarity features (Nicosia et al., 2015); and PMI-based goodness polarity lexicons (Balchev et al., 2016; Mihaylov et al., 2017a).…”
Section: Related Work
confidence: 99%
“…Following (Balchev et al., 2016), we build this lexicon using pointwise mutual information, starting with the training data from SemEval-2016 Task 3, and then extending this to words from the Qatar Living dump. We use the same nine features as for sentiment, but this time we only have one lexicon and we only use words (no bigrams).…”
Section: Answer Features — Credibility (31 Features)
confidence: 99%
“…GOODNESS (9 features) Similarly, we build goodness polarity lexicons that contain 41,633 words, each associated with a real number representing its strength of association with Good or Bad answers. Following (Balchev et al., 2016), we build this lexicon using pointwise mutual information, starting with the training data from SemEval-2016 Task 3, and then extending this to words from the Qatar Living dump. We use the same nine features as for sentiment, but this time we only have one lexicon and we only use words (no bigrams).…”
Section: Credibility (31 Features)
confidence: 99%