Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017) 2017
DOI: 10.18653/v1/s17-2003
SemEval-2017 Task 3: Community Question Answering

Abstract: We describe SemEval-2017 Task 3 on Community Question Answering. This year, we reran the four subtasks from SemEval-2016: (A) Question-Comment Similarity, (B) Question-Question Similarity, (C) Question-External Comment Similarity, and (D) Rerank the correct answers for a new question in Arabic, providing all the data from 2015 and 2016 for training, and fresh data for testing. Additionally, we added a new subtask E in order to enable experimentation with Multi-domain Question Duplicate Detection in a larger-scale…

Cited by 181 publications (171 citation statements). References 65 publications.
“…All code was implemented in Python 3.5. For data extraction, we converted the XML documents provided by Nakov et al. (2017) into pandas DataFrames, retaining the subject text, body text, and metadata related to the original and related questions. The feature extraction and the pairwise-preference learning phase are described below.…”
Section: System Description
confidence: 99%
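The XML-to-DataFrame extraction step quoted above might be sketched as follows. This is a minimal illustration, not the cited system's code; the element and attribute names (`OrgQuestion`, `ORGQ_ID`, `OrgQSubject`, `OrgQBody`) mirror the SemEval CQA XML layout but are simplified here:

```python
import xml.etree.ElementTree as ET
import pandas as pd

# Toy stand-in for a SemEval-2017 Task 3 XML file; the real files also
# carry related questions, comments, and relevance labels.
xml_data = """
<Questions>
  <OrgQuestion ORGQ_ID="Q1">
    <OrgQSubject>Visa renewal</OrgQSubject>
    <OrgQBody>How do I renew my visa in Qatar?</OrgQBody>
  </OrgQuestion>
  <OrgQuestion ORGQ_ID="Q2">
    <OrgQSubject>Best beaches</OrgQSubject>
    <OrgQBody>Which beaches are family friendly?</OrgQBody>
  </OrgQuestion>
</Questions>
"""

root = ET.fromstring(xml_data)
rows = []
for q in root.iter("OrgQuestion"):
    # Retain the subject text, body text, and the question ID metadata.
    rows.append({
        "id": q.get("ORGQ_ID"),
        "subject": q.findtext("OrgQSubject"),
        "body": q.findtext("OrgQBody"),
    })
df = pd.DataFrame(rows)
print(df)
```

One row per original question keeps downstream feature extraction a simple column-wise operation on the DataFrame.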
“…For tuning the parameters and seeking the best combination of features, we trained an SVM with a linear kernel on the TRAIN dataset and applied the model to the DEV dataset. We chose the two best cost parameters C with specific feature combinations in Table 2 (Nakov et al., 2017).…”
Section: Feature Selection
confidence: 99%
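The tuning loop described above can be sketched like this: train a linear-kernel SVM on TRAIN for a grid of cost values C, score each model on DEV, and keep the best. The synthetic data and the C grid are placeholders for the task's real feature vectors and the values reported in the cited paper's Table 2:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder TRAIN/DEV split standing in for the SemEval feature vectors.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_dev, y_train, y_dev = train_test_split(
    X, y, test_size=0.3, random_state=0
)

best_c, best_score = None, -1.0
for c in (0.01, 0.1, 1.0, 10.0):  # illustrative C grid
    model = SVC(kernel="linear", C=c).fit(X_train, y_train)
    score = model.score(X_dev, y_dev)  # accuracy on DEV
    if score > best_score:
        best_c, best_score = c, score

print(best_c, round(best_score, 3))
```

Selecting C on DEV rather than TRAIN is what prevents the cost parameter from simply rewarding overfitting.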
“…One way to solve this problem is to design systems that automatically find content (questions, answers, comments) similar to the user's posted question. SemEval-2017 Task 3 (Nakov et al., 2017) addresses this problem in community question answering via subtasks that rank relevant information in Qatar Living forum data. The system presented in this paper focuses on subtask B: re-ranking a given set of questions, retrieved by a search engine, according to their similarity to the original question.…”
Section: Introduction
confidence: 99%
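A minimal baseline for the subtask B setup described above is to re-rank the search-engine candidates by TF-IDF cosine similarity to the original question. The questions below are invented examples, and real systems use far richer features; this sketch only shows the shape of the re-ranking step:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical original question and search-engine candidates.
original = "how can I renew my work visa in Qatar"
candidates = [
    "best restaurants near the corniche",
    "steps to renew a Qatar work visa",
    "how do I apply for a family visa",
]

# Fit TF-IDF on all texts, then score each candidate against the original.
vec = TfidfVectorizer().fit([original] + candidates)
sims = cosine_similarity(
    vec.transform([original]), vec.transform(candidates)
)[0]

# Re-rank candidates from most to least similar.
ranked = [c for _, c in sorted(zip(sims, candidates), reverse=True)]
print(ranked[0])
```

The candidate sharing the content words "renew", "work", "visa", and "Qatar" rises to the top, which is exactly the behavior subtask B evaluates (with MAP over the re-ranked list).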