2022
DOI: 10.48550/arxiv.2203.08685
Preprint
A Feasibility Study of Answer-Agnostic Question Generation for Education

Cited by 1 publication (5 citation statements)
References: 0 publications
“…We based our choice of the model on the findings by Dong et al. (2019) and , who showed that language models fine-tuned jointly on Question Answering and Question Generation outperform individual models fine-tuned independently on those tasks. More specifically, we use the model by Dugan et al. (2022) and make slight modifications. Dugan et al. (2022) used a T5 model fine-tuned on SQuAD and further fine-tuned it on three tasks simultaneously: Question Generation (QG), Question Answering (QA), and Answer Extraction (AE).…”
Section: Question-Answer Generation
Citation type: mentioning
Confidence: 99%
“…More specifically, we use the model by Dugan et al. (2022) and make slight modifications. Dugan et al. (2022) used a T5 model fine-tuned on SQuAD and further fine-tuned it on three tasks simultaneously: Question Generation (QG), Question Answering (QA), and Answer Extraction (AE). They also included a summarization module to create lexically diverse question-answer pairs.…”
Section: Question-Answer Generation
Citation type: mentioning
Confidence: 99%
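The citing passages describe a single T5 model, fine-tuned on SQuAD and then jointly on question generation, question answering, and answer extraction, that produces question-answer pairs from a passage. The following is a minimal sketch of how such a task-prefixed multi-task pipeline can be driven with the Hugging Face transformers API; the checkpoint name and prompt formats are placeholders for illustration, not the exact conventions used by Dugan et al. (2022).

```python
# Minimal sketch (assumed, not the authors' released code) of a multi-task
# T5 pipeline for answer-agnostic question generation: answer extraction (AE),
# question generation (QG), and question answering (QA) handled by one
# seq2seq model via task prefixes. Checkpoint name and prefixes are placeholders.
from transformers import T5ForConditionalGeneration, T5TokenizerFast

MODEL_NAME = "t5-base"  # assumed stand-in for a SQuAD / multi-task fine-tuned checkpoint
tokenizer = T5TokenizerFast.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

def run(prompt: str, max_new_tokens: int = 64) -> str:
    """One beam-search generation step for a task-prefixed prompt."""
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, num_beams=4, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

context = "The mitochondrion is the organelle that produces most of the cell's ATP."

# 1) Answer extraction (AE): propose a candidate answer span from the passage.
answer = run(f"extract answers: {context}")

# 2) Question generation (QG): condition on the extracted answer and the passage.
question = run(f"generate question: answer: {answer} context: {context}")

# 3) Question answering (QA): answer the generated question as a consistency check.
qa_prediction = run(f"question: {question} context: {context}")

print(f"Q: {question}\nA: {answer}\nQA check: {qa_prediction}")
```

Sharing one model across the three tasks is what allows the QA step to act as a round-trip filter on generated questions, which is the motivation the citing authors give for preferring joint fine-tuning over separate per-task models.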