Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1415

Learning to Ask Unanswerable Questions for Machine Reading Comprehension

Abstract: Machine reading comprehension with unanswerable questions is a challenging task. In this work, we propose a data augmentation technique by automatically generating relevant unanswerable questions according to an answerable question paired with its corresponding paragraph that contains the answer. We introduce a pair-to-sequence model for unanswerable question generation, which effectively captures the interactions between the question and the paragraph. We also present a way to construct training data for our …
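The abstract describes the pair-to-sequence generator only at a high level. As a rough illustration, and not the authors' actual architecture, the sketch below shows one way such a model could be wired up in PyTorch: separate encoders for the answerable question and its paragraph, cross-attention to capture their interaction, and a decoder that emits a new question. All layer choices, dimensions, and names here are assumptions.

```python
# Minimal pair-to-sequence sketch; illustrative only, not the published model.
import torch
import torch.nn as nn

class PairToSeq(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Separate encoders for the answerable question and its paragraph.
        self.q_enc = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.p_enc = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        # Cross-attention: question states attend over paragraph states,
        # a simple stand-in for the question-paragraph interaction in the abstract.
        self.attn = nn.MultiheadAttention(2 * hid_dim, num_heads=4, batch_first=True)
        self.dec = nn.GRU(emb_dim + 2 * hid_dim, 2 * hid_dim, batch_first=True)
        self.out = nn.Linear(2 * hid_dim, vocab_size)

    def forward(self, question, paragraph, target):
        q, _ = self.q_enc(self.embed(question))       # (B, Lq, 2H)
        p, _ = self.p_enc(self.embed(paragraph))      # (B, Lp, 2H)
        fused, _ = self.attn(q, p, p)                 # question attends to paragraph
        ctx = fused.mean(dim=1, keepdim=True)         # crude pooled context vector
        tgt = self.embed(target)                      # teacher-forced decoder input
        dec_in = torch.cat([tgt, ctx.expand(-1, tgt.size(1), -1)], dim=-1)
        dec_out, _ = self.dec(dec_in)
        return self.out(dec_out)                      # logits over the vocabulary

# Toy usage with random token ids.
model = PairToSeq(vocab_size=1000)
q = torch.randint(0, 1000, (2, 12))
p = torch.randint(0, 1000, (2, 60))
t = torch.randint(0, 1000, (2, 14))
logits = model(q, p, t)   # shape (2, 14, 1000)
```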

Cited by 35 publications (32 citation statements); references 32 publications.
“…Hu et al. (2019) address unanswerability of questions from a given text using additional verification steps. Other approaches have shown the benefit of synthetic data for improving performance on SQuAD 2.0 (Zhu et al., 2019; Alberti et al., 2019). In contrast to prior work, we demonstrate that despite improving performance on test sets that include unanswerable questions, the problem persists when adversarially choosing from a larger space of questions.…”
Section: Concurrent Work on Model (citation type: contrasting)
confidence: 71%
“…As our goal is to provide a broad suite of questions that test a single model's reading ability, we additionally provide synthetic augmentations to some of the datasets in our evaluation server. Several recent papers have proposed question transformations that result in out-of-distribution test examples, helping to judge the generalization capability of reading models (Ribeiro et al., 2018, 2019; Zhu et al., 2019). We collect the best of these, add some of our own, and keep those that generate reasonable and challenging questions.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
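The statement above refers to rule-based question transformations that push test questions out of distribution. As a toy, hedged illustration (my own example, not a transformation taken from Ribeiro et al. or Zhu et al.), the snippet below swaps a question token for one that never appears in the paragraph, which typically makes the question unanswerable from that paragraph.

```python
from typing import Optional

def swap_token(question: str, paragraph: str, replacements: dict) -> Optional[str]:
    """Toy question transformation: replace a question token with a substitute
    that does not occur in the paragraph, so the result is likely unanswerable.
    Returns None if no replacement rule applies."""
    for original, substitute in replacements.items():
        if original in question and substitute not in paragraph:
            return question.replace(original, substitute)
    return None

# Hypothetical example; a real rule table would be mined or hand-written.
paragraph = "Tesla moved to New York City in 1884 to work for Thomas Edison."
question = "Which city did Tesla move to in 1884?"
print(swap_token(question, paragraph, {"1884": "1901", "Tesla": "Marconi"}))
# -> "Which city did Tesla move to in 1901?"  (not answerable from the paragraph)
```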
“…When we extensively tested the question answering system with attention to how answers are generated, we found that question comprehension plays a significant role in the question answering system [38]. Systems such as [46] also introduce a pair-to-sequence model that captures the interaction between the question asked and the given paragraph. Specific systems such as ParaQG [20] try to generate questions from the paragraph.…”
Section: Related Work (citation type: mentioning)
confidence: 99%