2017
DOI: 10.48550/arxiv.1706.02027
Preprint

Question Answering and Question Generation as Dual Tasks

Abstract: We study the problem of joint question answering (QA) and question generation (QG) in this paper. Our intuition is that QA and QG have intrinsic connections and these two tasks could improve each other. On one side, the QA model judges whether the generated question of a QG model is relevant to the answer. On the other side, the QG model provides the probability of generating a question given the answer, which is useful evidence that in turn facilitates QA. In this paper we regard QA and QG as dual tasks. We…
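To make the coupling concrete, here is a minimal sketch of a dual-task training objective, assuming the standard probabilistic-duality regularizer log P(q) + log P(a|q) ≈ log P(a) + log P(q|a); the language-model priors, batch shapes, and the weight lam are illustrative assumptions rather than the paper's exact implementation.

    # Minimal PyTorch sketch of a duality regularizer coupling QA and QG.
    # Assumes log-likelihoods are already computed per (question, answer) pair.
    import torch

    def duality_loss(log_p_q, log_p_a_given_q, log_p_a, log_p_q_given_a):
        # Penalize disagreement between the two factorizations of P(q, a):
        # P(q) * P(a|q) should match P(a) * P(q|a).
        gap = (log_p_q + log_p_a_given_q) - (log_p_a + log_p_q_given_a)
        return (gap ** 2).mean()

    def joint_loss(qa_nll, qg_nll, log_p_q, log_p_a, lam=0.1):
        # qa_nll = -log P(a|q) from the QA model; qg_nll = -log P(q|a) from QG.
        # The duality term lets each model act as a regularizer for the other.
        dual = duality_loss(log_p_q, -qa_nll, log_p_a, -qg_nll)
        return qa_nll.mean() + qg_nll.mean() + lam * dual

    # Toy usage with random scores for a batch of 4 question-answer pairs.
    qa_nll, qg_nll = torch.rand(4), torch.rand(4)
    log_p_q, log_p_a = -torch.rand(4), -torch.rand(4)  # stand-in LM priors
    print(float(joint_loss(qa_nll, qg_nll, log_p_q, log_p_a)))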

Cited by 116 publications (53 citation statements) | References 18 publications

“…One may expect that NarrativeQA could also be used for QAG tasks. In fact, a couple of recent works use this dataset and train a network that combines a QG module and a QA module with a reinforcement learning approach (Tang et al., 2017). For example, Wang et al. (2017) use the QA result to reward the QG module and then jointly train the two sub-systems.…”
Section: Related Work (mentioning)
confidence: 99%
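The QA-as-reward coupling mentioned in this excerpt can be sketched as a REINFORCE-style update: the QG policy samples a question, the QA model scores it, and that score becomes the reward. The toy vocabulary, the bag-of-words stand-in for the QA scorer, and all hyperparameters below are illustrative assumptions, not the cited systems.

    # Toy REINFORCE loop: a tabular QG "policy" over a small vocabulary is
    # rewarded by a stand-in QA scorer (token overlap with the answer).
    import torch

    VOCAB, Q_LEN = 20, 5
    logits = torch.zeros(Q_LEN, VOCAB, requires_grad=True)  # QG policy params

    def qa_reward(question_ids, answer_ids):
        # Stand-in QA model: fraction of question tokens found in the answer.
        overlap = set(question_ids.tolist()) & set(answer_ids.tolist())
        return len(overlap) / Q_LEN

    def reinforce_step(answer_ids, baseline=0.5):
        dist = torch.distributions.Categorical(logits=logits)
        question = dist.sample()                  # sample a toy "question"
        log_prob = dist.log_prob(question).sum()  # log P(question | policy)
        reward = qa_reward(question, answer_ids)  # QA model provides reward
        # Policy gradient: raise the probability of questions the QA
        # model rates above the baseline, lower it otherwise.
        return -(reward - baseline) * log_prob, reward

    opt = torch.optim.Adam([logits], lr=0.05)
    answer = torch.tensor([1, 2, 3])
    for _ in range(200):
        opt.zero_grad()
        loss, reward = reinforce_step(answer)
        loss.backward()
        opt.step()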
“…Iterative back-translation (IBT) (Hoang et al., 2018; Guo et al., 2021) is an extension of back-translation in which forward and backward models are trained together, each generating pseudo-parallel instances for the other. A similar line of studies (Su, Huang, and Chen, 2020; Tang et al., 2017) adopts dual learning, which incorporates the inference process of both models into training via reinforcement learning. IBT and dual learning rest on the same idea: jointly solving tasks that exhibit duality, as used in machine translation.…”
Section: Related Work (mentioning)
confidence: 99%
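The IBT loop summarized in this excerpt can be illustrated with a short, self-contained sketch; the ToyModel stub and its translate/train interface are hypothetical placeholders for real seq2seq models, not the cited implementations.

    # Data flow of iterative back-translation: two models alternately
    # label monolingual data to create pseudo-parallel training pairs
    # for each other.
    class ToyModel:
        """Placeholder 'translator'; a real system would be a seq2seq model."""
        def __init__(self, fn):
            self.fn = fn
        def translate(self, sentence):
            return self.fn(sentence)
        def train(self, pairs):
            pass  # a real model would update its parameters on these pairs

    def iterative_back_translation(fwd, bwd, mono_src, mono_tgt, rounds=3):
        for _ in range(rounds):
            # Forward model labels source-side text; the backward model
            # trains on the resulting (pseudo-target, source) pairs.
            pseudo_tgt = [fwd.translate(s) for s in mono_src]
            bwd.train(list(zip(pseudo_tgt, mono_src)))
            # Symmetrically, the backward model labels target-side text.
            pseudo_src = [bwd.translate(t) for t in mono_tgt]
            fwd.train(list(zip(pseudo_src, mono_tgt)))
        return fwd, bwd

    fwd = ToyModel(str.upper)  # stands in for "source -> target"
    bwd = ToyModel(str.lower)  # stands in for "target -> source"
    iterative_back_translation(fwd, bwd, ["a question"], ["AN ANSWER"])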
“…Question generation has the potential to improve the training of QA systems (Du and Cardie, 2018; Tang et al., 2017) and to help chatbots start a conversation with human users (Mostafazadeh et al., 2016). However, the well-designed templates of rule-based approaches lack diversity and do not generalize to new domains (Ali et al., 2010).…”
Section: Related Work (mentioning)
confidence: 99%