Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018
DOI: 10.18653/v1/n18-2089

Crowdsourcing Question-Answer Meaning Representations

Abstract: We introduce Question-Answer Meaning Representations (QAMRs), which represent the predicate-argument structure of a sentence as a set of question-answer pairs. We also develop a crowdsourcing scheme to show that QAMRs can be labeled with very little training, and gather a dataset with over 5,000 sentences and 100,000 questions. A detailed qualitative analysis demonstrates that the crowd-generated question-answer pairs cover the vast majority of predicate-argument relationships in existing datasets (including P…
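To make the format concrete, the sketch below renders one sentence's QAMR as a set of question-answer pairs whose answers are token spans of the sentence, as the abstract describes. The QAPair class, its field names, and the example sentence are illustrative assumptions for exposition, not the dataset's actual schema.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Minimal illustrative sketch: QAPair, its field names, and the example
# sentence are assumptions for exposition, not the QAMR dataset's schema.
@dataclass
class QAPair:
    question: str                 # free-form wh-question about the sentence
    answer_span: Tuple[int, int]  # [start, end) token indices of the answer

def answer_text(tokens: List[str], span: Tuple[int, int]) -> str:
    """Recover the answer phrase from its token span."""
    start, end = span
    return " ".join(tokens[start:end])

# One sentence and a few QA pairs that together form its QAMR; every
# answer is a contiguous span of the sentence, as the paper specifies.
tokens = "Ada Lovelace wrote the first published algorithm in 1843 .".split()
qamr = [
    QAPair("Who wrote an algorithm?", (0, 2)),  # Ada Lovelace
    QAPair("What did she write?", (3, 7)),      # the first published algorithm
    QAPair("When was it written?", (8, 9)),     # 1843
]

for qa in qamr:
    print(f"{qa.question} -> {answer_text(tokens, qa.answer_span)}")
```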

Cited by 51 publications (39 citation statements)
References 22 publications
“…Question-Answer Meaning Representation, or QAMR (Michael et al., 2018), was recently proposed as an extension of QA-SRL. Like QA-SRL, QAMR represents predicate-argument structure with a set of question-answer pairs about a sentence, where each answer is a span from the sentence.…”
Section: The QAMR Corpus (mentioning)
confidence: 99%
“…As described by Michael et al. (2018), QAMR annotations were gathered via crowdsourcing in a two-stage pipeline over Wikipedia and Wikinews text. We use the training partition of the QAMR dataset, which consists of 51,063 QA pairs over 3,938 sentences.…”
Section: The All Words Open IE Corpus (mentioning)
confidence: 99%
“…We developed a taxonomy of nine reasons why answers may differ, which are summarized in Table 1. Six of the nine reasons are inspired by the crowdsourcing literature: INV [32], DFF [44], AMB [24, 26, 43], SBJ [33, 43, 9], SYN [32], and SPM [41, 42, 14, 15]. Two of the reasons are inspired by prior visual question answering work [20]: LQI and IVE.…”
Section: Labeled Datasets (mentioning)
confidence: 99%
“…It is very likely that some complex annotation tasks require people to read dozens of pages of annotation guidelines, but once decomposed into smaller subtasks, even laymen can handle them. Annotation schemes driven by crowdsourced question-answering, known to provide only partial coverage, are successful examples of this idea (He et al., 2015; Michael et al., 2017). Therefore, this paper is hopefully interesting to a broad audience.…”
Section: Discussion and Conclusion (mentioning)
confidence: 99%