Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.416

Multi-hop Question Generation with Graph Convolutional Network

Abstract: Multi-hop Question Generation (QG) aims to generate answer-related questions by aggregating and reasoning over multiple pieces of scattered evidence from different paragraphs. It is a more challenging yet under-explored task than conventional single-hop QG, where questions are generated from the sentence containing the answer or from nearby sentences in the same paragraph without complex reasoning. To address the additional challenges in multi-hop QG, we propose Multi-Hop Encoding Fusion Network for Question Gener…
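The abstract describes the encoder only at a high level, so the snippet below is a minimal, hedged illustration of the kind of graph convolutional step typically used to aggregate evidence over an entity graph. It is a toy sketch under our own assumptions, not the paper's actual architecture; the class name GraphConvLayer, the feature dimensions, and the toy adjacency matrix are invented for illustration.

import torch
import torch.nn as nn


class GraphConvLayer(nn.Module):
    # One GCN layer: H' = ReLU(A_hat @ H @ W), where A_hat is the
    # symmetrically normalized adjacency matrix with self-loops.
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Add self-loops so each node keeps its own features.
        adj = adj + torch.eye(adj.size(0), device=adj.device)
        # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}.
        deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
        a_hat = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)
        return torch.relu(a_hat @ self.linear(node_feats))


if __name__ == "__main__":
    # Toy entity graph: 4 entity-mention nodes with 16-dim contextual features;
    # the edges (made up here) connect mentions that co-occur across paragraphs.
    feats = torch.randn(4, 16)
    adj = torch.tensor([[0., 1., 0., 1.],
                        [1., 0., 1., 0.],
                        [0., 1., 0., 1.],
                        [1., 0., 1., 0.]])
    layer = GraphConvLayer(16, 8)
    print(layer(feats, adj).shape)  # torch.Size([4, 8])

Stacking a few such layers lets information propagate between entities mentioned in different paragraphs, which is the general intuition behind graph-based multi-hop encoding.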

Cited by 33 publications (32 citation statements)
References 31 publications

“…In particular, question likelihoods and rewards are designed to steer them toward being addressed by the given answers (Zhou et al., 2019a; Zhang and Bansal, 2019). Attempts are also made toward creating complex questions that require multi-hop reasoning over the given text, and graph-based representations have been an enabling tool to facilitate the access to both entities and relations (Pan et al., 2020; Su et al., 2020). While our model also enhances the input with a semantic graph, it boasts a richer representation by including both dependency and semantic relations, with predicted question focuses highlighted via extra node embeddings.…”
Section: Related Work
confidence: 99%
“…Significant progress has been made in generating factoid questions (Zhang and Bansal, 2019; Zhou et al., 2019b; Su et al., 2020), yet new challenges need to be addressed for open-ended questions. First, specifying the question type is crucial for constructing meaningful questions (Graesser et al., 1992).…”
Section: Introduction
confidence: 99%
“…3.1.3 Question generation: Generating multi-hop QA pairs requires accumulation of information across different contexts, which is itself an unsolved problem [54,63,72,79,102,107,119,137,176]. As discussed above, Welbl et al. [159] automatically generate questions using existing KBs, where WikiData and DrugBank [163] are used as the knowledge bases, and Wikipedia and Medline are used as the document corpus.…”
Section: Dataset Creation
confidence: 99%
“…Grounded generation is the task of leveraging external knowledge sources to enhance the generation. Previous work has either used structured external knowledge sources (Liu et al., 2018; Young et al., 2018; Su et al., 2020a) or unstructured data. introduced a document grounded dataset for text conversations, and proposed to extract lexical control phrases to do controllable grounded response generation, while Zhang et al. (2021) jointly trained a retriever and generator so that annotated text-reference parallel data are not needed.…”
Section: Related Work
confidence: 99%