Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval 2017
DOI: 10.1145/3077136.3080757

Large-Scale Goodness Polarity Lexicons for Community Question Answering

Abstract: We transfer a key idea from the field of sentiment analysis to a new domain: community question answering (cQA). The cQA task we are interested in is the following: given a question and a thread of comments, we want to re-rank the comments, so that the ones that are good answers to the question are ranked higher than the bad ones. We notice that good vs. bad comments use specific vocabulary and that one can often predict the goodness/badness of a comment even ignoring the question, based on the comment content…
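The abstract's central claim, that comment goodness can often be predicted from vocabulary alone, is what the "goodness polarity lexicons" of the title capture; one of the citing papers below describes them as PMI-based. Below is a minimal sketch of that general recipe, not the authors' exact method: the function names, tokenization, and add-alpha smoothing are illustrative assumptions.

```python
import math
from collections import Counter

def build_goodness_lexicon(comments, labels, alpha=0.5):
    """Build a PMI-style goodness polarity lexicon from labeled comments.

    For each word w, score(w) = PMI(w, good) - PMI(w, bad), which
    reduces to log(P(w | good) / P(w | bad)); add-alpha smoothing
    guards against zero counts. (Illustrative sketch, not the
    paper's exact formulation.)
    """
    good_counts, bad_counts = Counter(), Counter()
    for comment, label in zip(comments, labels):
        counts = good_counts if label == "good" else bad_counts
        counts.update(set(comment.lower().split()))  # count each word once per comment

    n_good, n_bad = sum(good_counts.values()), sum(bad_counts.values())
    vocab = set(good_counts) | set(bad_counts)

    lexicon = {}
    for w in vocab:
        p_w_good = (good_counts[w] + alpha) / (n_good + alpha * len(vocab))
        p_w_bad = (bad_counts[w] + alpha) / (n_bad + alpha * len(vocab))
        lexicon[w] = math.log(p_w_good / p_w_bad)  # > 0 leans "good", < 0 leans "bad"
    return lexicon

def goodness_score(comment, lexicon):
    """Question-independent goodness score: sum of per-word lexicon scores."""
    return sum(lexicon.get(w, 0.0) for w in comment.lower().split())
```

In a re-ranking setting, goodness_score(comment, lexicon) would be one feature among others (e.g., question-comment similarity features), since the lexicon deliberately ignores the question.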

Cited by 6 publications (5 citation statements). References 22 publications.

“…Moreover, given the recent success of deep neural networks, which largely eliminate the need for manual feature engineering, we are interested in exploring various deep neural network architectures, e.g., based on LSTMs, GRUs, CNNs, RNNs, etc., as well as various ways to generate high-quality word embeddings (Mihaylov and Nakov 2016b) and goodness polarity lexicons (Balchev et al. 2016; Mihaylov et al. 2017), e.g., as was done for SemEval-2016 Task 3 for English. We would also like to compare side-by-side deep neural networks to convolutional syntactic kernels, and to investigate ways to combine them.…”
Section: Results (mentioning, confidence: 99%)
“…In future work, we plan to model text complexity (Mihaylova et al., 2016), veracity (Mihaylova et al., 2018), speech act (Joty and Hoque, 2016), user profile (Mihaylov et al., 2015), trollness (Mihaylov et al., 2018), and goodness polarity (Mihaylov et al., 2017). From a modeling perspective, we want to strongly couple CRF and DNN, so that the global errors are backpropagated from the CRF down to the DNN layers.…”
Section: Discussion (mentioning, confidence: 99%)
“…Finally, we should mention some interesting features used by the participating systems across all three subtasks. This includes fine-tuned word embeddings (Mihaylov and Nakov, 2016b); features modeling text complexity, veracity, and user trollness (Mihaylova et al., 2016); sentiment polarity features (Nicosia et al., 2015); and PMI-based goodness polarity lexicons (Mihaylov et al., 2017a).…”
Section: Related Work (mentioning, confidence: 99%)