“…QG has progressed rapidly due to new datasets and model improvements. Many different QG models have been proposed, ranging from simple vanilla sequence-to-sequence (seq2seq) neural network models (Du et al., 2017; Zhou et al., 2017; Yuan et al., 2017) to the more recent transformer-based models (Dong et al., 2019; Chan and Fan, 2019; Varanasi et al., 2020; Narayan et al., 2020; Bao et al., 2020). Some QG systems use manual linguistic features in their models (Harrison and Walker, 2018; Khullar et al., 2018; Liu et al., 2019a; Dhole and Manning, 2020), some consider how to select question-worthy content (Du and Cardie, 2017; Scialom et al., 2019), and some systems explicitly model question types (Duan et al., 2017; Sun et al., 2018; Kang et al., 2019; Zhou et al., 2019).…”