Proceedings of the 13th International Workshop on Semantic Evaluation 2019
DOI: 10.18653/v1/s19-2203
Fermi at SemEval-2019 Task 8: An elementary but effective approach to Question Discernment in Community QA Forums

Abstract: Online Community Question Answering Forums (cQA) have gained massive popularity in recent years. The rise in users of such forums has increased the need for automated question comprehension and for fact evaluation of the answers provided by the various participants in a forum. Our team, Fermi, participated in sub-task A of Task 8 at SemEval 2019, which tackles the first problem in the pipeline of factual evaluation in cQA forums, i.e., deciding whether a posed question asks for a fac…

Cited by 2 publications (3 citation statements). References 20 publications.
“…From the annotation, we found that factual questions are actually the dominant type in PQA forums, where around 71% of the annotated questions are factual ones. Following the strategy in Syed et al (2019), which ranked first for predicting the question type in SemEval-2019 Task 8, we applied the Universal Sentence representation (Cer et al, 2018) to encode question texts. We found that the SVM classifier performs slightly better than the XGBoost (Chen and Guestrin, 2016) used in their work, achieving on average 0.85 accuracy and 0.90 F1 score under 5-fold cross validation.…”
Section: Factual QA Pairs Filtering
confidence: 99%
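The pipeline quoted above — encode each question as a fixed-size sentence embedding, then train a classifier under 5-fold cross-validation — can be sketched as follows. This is a minimal illustration, not the cited authors' code: random vectors stand in for the 512-dimensional Universal Sentence Encoder embeddings, and the labels are synthetic.

```python
# Sketch of the question-type classification setup described in the citation:
# sentence embeddings -> SVM classifier -> 5-fold cross-validation.
# Assumption: embeddings are precomputed elsewhere; random vectors stand in here.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))           # stand-in for USE question embeddings
y = rng.integers(0, 2, size=200)          # 1 = factual question, 0 = opinion/other

# 5-fold cross-validated accuracy with an SVM, as in the citing work
svm = SVC(kernel="rbf")
scores = cross_val_score(svm, X, y, cv=5, scoring="accuracy")
print("mean accuracy:", round(scores.mean(), 3))
```

Swapping `SVC` for an XGBoost classifier reproduces the comparison the citing authors report; on real embeddings the two differ only slightly.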
“…Fermi (Syed et al, 2019) — IIIT Hyderabad, Microsoft, Teradata — Accuracy 0.840, F1 0.7182, AvgRec 0.7353; TMLab (Niewiński et al, 2019) — Samsung R&D Institute, Warsaw, Poland. The best system for Subtask A was by team Fermi (IIIT Hyderabad). They used Google's Universal Sentence representation (Cer et al, 2018) and XGBoost (Chen and Guestrin, 2016).…”
Section: Team ID, Affiliation, Accuracy, F1, AvgRec
confidence: 99%
“…Fermi (Syed et al, 2019) — IIIT Hyderabad, Microsoft, Teradata — 0.840, 0.7182, 0.7353; TMLab (Niewiński et al, 2019) — Samsung. (Some teams did not submit system description papers, and thus we have no citations for their systems.) Ablation studies and the experiments with different techniques are described by the participants in their respective system description papers.…”
Section: Accuracy, F1, AvgRec
confidence: 99%