Recent advances in pre-trained language models have led to impressive performance on open-domain text generation tasks such as story completion (See et al., 2019; Yao et al., 2019; Fan et al., 2019; Ippolito et al., 2020), dialogue generation (Rashkin et al., 2019b; Zhang et al., 2020b; Li, 2020; Vulić et al., 2021), and question generation (Cheng et al., 2021; Wang et al., 2021). Despite this success on a range of open-ended text generation tasks, generative pre-trained language models still struggle to maintain coherence across multiple sentences because of their left-to-right, word-by-word generation style (Fan et al., 2019).
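To make the left-to-right generation style concrete, the following minimal sketch decodes text one token at a time with GPT-2 via the Hugging Face transformers library (an illustrative choice, not a model from the cited works); each step conditions only on the prefix generated so far, with no lookahead or global plan, which is why long-range coherence is hard to guarantee.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Once upon a time"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Left-to-right decoding: every step sees only the tokens already
# produced; nothing constrains the continuation toward a global plan.
for _ in range(30):
    with torch.no_grad():
        logits = model(input_ids).logits
    # Greedy choice of the next token from the final position's logits.
    next_id = torch.argmax(logits[:, -1, :], dim=-1, keepdim=True)
    input_ids = torch.cat([input_ids, next_id], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Greedy decoding is used here only for simplicity; sampling-based strategies (top-k, nucleus) share the same one-token-at-a-time structure and hence the same limitation.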