“…Traditional natural language generation methods rely on large corpora, which makes training typically expensive. To address this issue, researchers have explored a new paradigm based on pre-trained LMs, such as BERT (Devlin et al., 2019), GPT-2 (Radford et al., 2019), and BART (Lewis et al., 2020; Koto et al., 2020; Chen et al., 2022; Vougiouklis et al., 2020). By incorporating additional domain-specific control codes (Keskar et al., 2019), such as sentiment labels (Dathathri et al., 2020) or attribute vectors (Yu, Yu, & Sagae, 2021), these approaches aim to steer the pre-trained LM at little fine-tuning cost.…”
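As a minimal sketch of the control-code idea described above (an illustrative assumption, not the exact setup of any cited paper), a domain- or sentiment-specific prefix token can be prepended to the prompt of an off-the-shelf GPT-2 model from the Hugging Face transformers library; the model name, control string, and sampling parameters here are hypothetical choices.

# Minimal sketch of control-code conditioning (illustrative assumption; a real
# system would fine-tune the LM so these prefixes reliably steer generation).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

control_code = "[POSITIVE]"          # hypothetical sentiment label used as a prefix
prompt = f"{control_code} The movie was"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

In this sketch the control code is treated as ordinary text; methods such as CTRL instead learn dedicated control tokens during pre-training, while plug-and-play approaches steer decoding without modifying the LM weights at all.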