Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) 2020
DOI: 10.18653/v1/2020.emnlp-main.530
Inquisitive Question Generation for High Level Text Comprehension

Abstract: Inquisitive probing questions come naturally to humans in a variety of settings, but generating them is a challenging task for automatic systems. One natural type of question tries to fill a gap in knowledge during text comprehension, such as when reading a news article: we might ask about background information, the deeper reasons behind events, or more. Despite recent progress with data-driven approaches, generating such questions is beyond the range of models trained on existing datasets. We introduce INQUISITIVE, a da…

Cited by 25 publications (49 citation statements)
References 34 publications
“…Minsky's frames (1975) embodied this idea, where he defined a frame as "a collection of questions to be asked about a hypothetical situation", and whose answers helped elaborate the situation to aid question-answering. Other studies have identified what questions people naturally ask when reading text (Ko et al, 2020) or viewing images (Mostafazadeh et al, 2016), to form a mental picture of what might be happening. Our work draws on these ideas to explore how coherent a LM's mental picture is, and how it can be improved.…”
Section: Related Work
confidence: 99%
“…Traditionally, QG is tackled by rule-based methods (Heilman and Smith, 2010; Labutov et al, 2015; Dhole and Manning, 2020) that rely heavily on extensive hand-crafted rules. Different from these, neural network-based methods are completely data-driven and trainable in an end-to-end fashion (Du et al, 2017; Song et al, 2018; Kim et al, 2018; Nema et al, 2019; Zhou et al, 2019a; Zhou et al, 2019b; Jia et al, 2020b; Ko et al, 2020). For better representing the input context, the answer position and token lexical features (e.g.…”
Section: Related Work
confidence: 99%
“…Instead of asking factual questions with answers already present in the text, the questions of Ko et al (2020) are curiosity-driven, answer-agnostic, and seek a high-level understanding of the document being read. They released a dataset of such curiosity-driven questions (henceforth INQUISITIVE; for details, see Section 2).…”
Section: Introduction
confidence: 99%