2014 47th Hawaii International Conference on System Sciences
DOI: 10.1109/hicss.2014.180
Questioning the Question -- Addressing the Answerability of Questions in Community Question-Answering

Abstract: In this paper, we investigate question quality among questions posted in Yahoo! Answers to assess what factors contribute to the goodness of a question and determine if we can flag poor quality questions. Using human assessments of whether a question is good or bad and extracted textual features from the questions, we built an SVM classifier that performed with relatively good classification accuracy for both good and bad questions. We then enhanced the performance of this classifier by using additional human …
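The abstract describes an SVM trained on textual features of questions, with human good/bad assessments as labels. A minimal sketch of that setup, assuming scikit-learn; the toy questions and labels below are invented for illustration, and the paper's actual feature set and Yahoo! Answers data are not reproduced here:

```python
# Sketch of a good/bad question classifier, assuming scikit-learn.
# Toy data is invented; the paper's features and corpus are not shown here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

questions = [
    "How do I configure DNS settings on Ubuntu 22.04?",
    "What is the boiling point of water at sea level?",
    "stuff????",
    "plz quick answer",
]
labels = ["good", "good", "bad", "bad"]  # stands in for human assessments

# TF-IDF turns each question into a sparse term-weight vector; LinearSVC
# then learns a separating hyperplane between good and bad questions.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(questions, labels)

print(clf.predict(["Where can I find the Python 3 documentation?"]))
```

With realistic data the paper's "textual features" (e.g. clarity indicators) would replace or extend the plain TF-IDF vectorizer.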

Cited by 20 publications (14 citation statements) · References 24 publications
“…Several approaches exist to identifying Best Answers. Liu et al () generalized these approaches into an Asker Satisfaction Prediction (ASP) framework, including textual and semantic features of questions and answers, history of answer satisfaction by category, and past activity history of askers and answerers (Belkin et al, ; Jeon et al, ; Liu et al, ; Agichtein et al, ; Shah & Pomerantz, ; Shah, Kitzie, & Choi, ). Other studies have attempted to both add classification features and use other evaluative baselines for answer quality using human‐based assessments.…”
Section: Introduction (supporting)
confidence: 83%
“…A core issue in the literature on social Q&A interactions and online information seeking is the idea that some information requests are better than others [60][61][62][63], with good questions seen as those which are more likely to receive an answer [60]. Choi et al [60], looking at factual questions on Yahoo! Answers, found that textual features, such as the level of clarity in a question, can be important in predicting whether a question will receive an answer or not.…”
Section: Good and Bad Questions: Answer Success and Failure (mentioning)
confidence: 99%
“…[66]. Shah et al proposed different types of questions on social Q&A sites — factual, advice, opinion-seeking, and social questions — and demonstrated that adding information on question type could improve the performance of automatic classifiers based on textual features [63]. The implication from much of this work is that systems may wish to identify the real need behind a post and address that.…”
Section: Moderation and Automatic Classification (mentioning)
confidence: 99%
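The statement above notes that adding question-type information on top of textual features can improve classifier performance. A minimal sketch of that augmentation, assuming scikit-learn and NumPy; the data, the four-way type scheme encoding, and all names are illustrative:

```python
# Sketch: appending a one-hot question-type feature (factual / advice /
# opinion / social) to TF-IDF text features, as the cited work suggests.
# All data and names below are illustrative, not from the paper.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

questions = [
    "What year was the transistor invented?",
    "Should I learn Python or Java first?",
    "Is pineapple on pizza good?",
    "Anyone else awake right now?",
]
qtypes = ["factual", "advice", "opinion", "social"]
labels = ["good", "good", "bad", "bad"]  # toy quality labels

TYPE_INDEX = {"factual": 0, "advice": 1, "opinion": 2, "social": 3}

# Textual features: TF-IDF term weights per question.
X_text = TfidfVectorizer().fit_transform(questions).toarray()

# Question-type features: one-hot column per type.
X_type = np.zeros((len(qtypes), len(TYPE_INDEX)))
for i, t in enumerate(qtypes):
    X_type[i, TYPE_INDEX[t]] = 1.0

# Concatenate both feature groups and train on the combined matrix.
X = np.hstack([X_text, X_type])
clf = LinearSVC().fit(X, labels)
```

In practice the type label would itself come from a classifier or from annotation, not be given for free as it is here.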
“…The ratio of words of four or more letters to the total number of words [40,43]; the ratio of words of six or more letters to the total number of words.…”
Section: Richness (mentioning)
confidence: 99%
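The richness features quoted above are simple word-length ratios. A minimal sketch of computing them; the function name and tokenization (whitespace split, counting characters rather than strictly letters) are my own simplifications:

```python
# Sketch of the two "richness" ratio features quoted above: the share of
# words with >= 4 letters and the share with >= 6 letters. The name and
# the whitespace tokenization are illustrative, not from the paper.

def length_ratios(text: str) -> tuple[float, float]:
    """Return (ratio of words with >=4 letters, ratio with >=6 letters)."""
    words = text.split()
    if not words:
        return 0.0, 0.0
    four_plus = sum(1 for w in words if len(w) >= 4)
    six_plus = sum(1 for w in words if len(w) >= 6)
    return four_plus / len(words), six_plus / len(words)

print(length_ratios("please explain gradient descent to me"))
```

A production version would strip punctuation before measuring word length, since trailing characters like "?" inflate the counts.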