Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.545
How to Ask Good Questions? Try to Leverage Paraphrases

Abstract: Given a sentence and its relevant answer, how to ask good questions is a challenging task with many real-world applications. Inspired by humans' paraphrasing capability to ask questions with the same meaning but diverse expressions, we propose to incorporate paraphrase knowledge into question generation (QG) to generate human-like questions. Specifically, we present a two-hand hybrid model leveraging a self-built paraphrase resource, which is automatically constructed by a simple back-translation method. On th…
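The back-translation step mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `translate` function is a placeholder (the abstract does not specify which MT system was used), here backed by a toy lookup table so the example is self-contained; any English→pivot→English MT pair can be substituted.

```python
def translate(sentence: str, src: str, tgt: str) -> str:
    """Placeholder MT call; swap in a real translation system.

    The toy lookup table below stands in for a real translator,
    for illustration only."""
    toy = {
        ("what is the capital of france ?", "en", "de"):
            "was ist die hauptstadt von frankreich ?",
        ("was ist die hauptstadt von frankreich ?", "de", "en"):
            "which city is the capital of france ?",
    }
    # Fall back to the input unchanged when the toy table has no entry.
    return toy.get((sentence, src, tgt), sentence)


def back_translate(question: str, pivot: str = "de") -> str:
    """Round-trip a question through a pivot language to obtain a paraphrase."""
    pivoted = translate(question, "en", pivot)
    return translate(pivoted, pivot, "en")


def build_paraphrase_pairs(questions):
    """Pair each question with its back-translation, keeping only
    round-trips that actually changed the surface form."""
    pairs = []
    for q in questions:
        p = back_translate(q)
        if p != q:  # identical round-trips carry no paraphrase signal
            pairs.append((q, p))
    return pairs
```

With the toy table above, `build_paraphrase_pairs(["what is the capital of france ?"])` yields one pair whose second element is the reworded question; in practice one would also filter pairs by a similarity threshold to discard noisy round-trips.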

Cited by 29 publications (19 citation statements) · References 31 publications
“…We find that our full model outperforms the best previous paragraph-level model, S2S+GSA+MP [3], on all evaluations except BLEU-1, which indicates that our method performs better at modeling paragraph context. We also achieve higher scores than the recently promising multi-task methods [7,8]; it is worth noting that our method does not resort to any auxiliary task, so any ingenious training method can be employed for further improvement. Effect of ALBERT: when we remove ALBERT in this ablation, performance drops significantly, especially on METEOR; the full model benefits from the contextualization power of ALBERT.…”
Section: Results and Analysis
confidence: 76%
“…PG-enhanced QG [8], a sentence-level model that is assisted by an auxiliary paraphrase generation task.…”
Section: Evaluation Metrics and Baselines
confidence: 99%
“…(2005; Fernando and Stevenson, 2008; Socher et al., 2011; Jia et al., 2020) is an important task in the field of NLP, finding downstream applications in machine translation (Callison-Burch et al., 2006; Apidianaki et al., 2018; Mayhew et al., 2020), text summarization, plagiarism detection (Hunt et al., 2019), question answering, and sentence simplification (Guo et al., 2018). Paraphrases have proven to be a crucial part of NLP and language education, with research showing that paraphrasing helps improve reading comprehension skills (Lee and Colln, 2003; Hagaman and Reid, 2008).…”
Section: Related Work
confidence: 99%
“…and Qi et al. (2020) define a question utility function to guide the generation of conversational questions. Nakanishi et al. (2019) and Jia et al. (2020) incorporate knowledge with auxiliary tasks. These methods may generate irrelevant questions due to their pure generation nature.…”
Section: Related Work
confidence: 99%