2022
DOI: 10.48550/arxiv.2201.05273
Preprint

Pretrained Language Models for Text Generation: A Survey

Abstract: Text Generation aims to produce plausible and readable text in human language from input data. The resurgence of deep learning has greatly advanced this field through neural generation models, especially the paradigm of pretrained language models (PLMs). Grounding text generation on PLMs is seen as a promising direction in both academia and industry. In this survey, we present the recent advances achieved in the topic of PLMs for text generation. In detail, we begin by introducing three key points of applying PLM…

Cited by 21 publications (21 citation statements)
References 150 publications
“…, y_n) conditioned on input data X (e.g., one or more pieces of text and structured data) [78]. Typically, NLG tasks are categorized according to the data format of X and Y.…”
Section: Data Collection
confidence: 99%
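This quoted definition frames NLG as producing an output sequence Y = (y_1, …, y_n) conditioned on input data X. A minimal sketch of that conditional setup with an off-the-shelf PLM follows; the model choice (t5-small) and the summarization prompt are illustrative assumptions, not details from the quoted paper.

```python
# Minimal sketch: generate output text Y conditioned on input data X with a
# pretrained seq2seq LM. Model and prompt are assumptions for illustration.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# X: the input data (here plain text; structured data would be serialized
# into text before encoding).
x = "summarize: Pretrained language models have greatly advanced text generation."

inputs = tokenizer(x, return_tensors="pt")
# Y: the generated token sequence, decoded back into text.
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```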
“…The main goal of our work is to improve the global diversity of questions generated by an underlying pre-trained Transformer [18] model. Most existing works have fine-tuned pre-trained Transformer models with the primary objective of maximizing question-generation likelihood, while diversity was commonly dealt with only as a secondary objective at inference time [8,23]. Compared to [9], which utilized the Transformer model for the same task, we do not use any auxiliary data.…”
Section: Main Differences
confidence: 99%
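The quote notes that diversity is often handled only at inference time. One common inference-time technique is diverse beam search; the hedged sketch below uses Hugging Face's generate() for it, where the model, the prompt prefix, and the hyperparameters (num_beams, diversity_penalty) are assumptions for illustration, not the cited papers' settings.

```python
# Sketch: diverse beam search as an inference-time diversity mechanism.
# Beams are split into groups, and a diversity penalty discourages groups
# from emitting the same tokens.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Hypothetical prompt format for question generation.
context = "generate question: The Transformer architecture relies entirely on attention."
inputs = tokenizer(context, return_tensors="pt")

outputs = model.generate(
    **inputs,
    num_beams=6,            # total beams, split evenly across groups
    num_beam_groups=3,      # 3 groups of 2 beams each
    diversity_penalty=1.0,  # penalize tokens already chosen by other groups
    num_return_sequences=3,
    max_new_tokens=32,
)
for ids in outputs:
    print(tokenizer.decode(ids, skip_special_tokens=True))
```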
“…To this end, using a SentenceTransformer [14] dedicated to paraphrasing tasks, for each product category we obtain embeddings for the test-set questions generated by both T5 and T5+LTD. Next, using cosine similarity as the "distance" metric, we obtain question clusters for each alternative and measure the number of clusters obtained at increasing (dissimilarity) thresholds. We report the results in Figure 2, illustrating the relative improvement (green bars) or degradation (red bars) of T5+LTD compared to T5.…”
Section: RQ3
confidence: 99%
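A rough reconstruction of the described procedure: embed the generated questions with a paraphrase-oriented SentenceTransformer, then cluster them under increasing cosine-dissimilarity thresholds and count the clusters. The model name, the example questions, the threshold values, and the use of scikit-learn's agglomerative clustering are all assumptions; the quoted paper does not specify its clustering algorithm.

```python
# Sketch: count question clusters at increasing cosine-dissimilarity
# thresholds. Requires scikit-learn >= 1.2 (earlier versions use affinity=
# instead of metric=).
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

# Hypothetical generated questions for one product category.
questions = [
    "What material is the case made of?",
    "Is the case made from metal?",
    "Does the battery last all day?",
]

model = SentenceTransformer("paraphrase-MiniLM-L6-v2")
embeddings = model.encode(questions)

for threshold in (0.2, 0.4, 0.6):
    clustering = AgglomerativeClustering(
        n_clusters=None,            # let the threshold decide the cluster count
        metric="cosine",            # cosine dissimilarity as the "distance"
        linkage="average",
        distance_threshold=threshold,
    )
    labels = clustering.fit_predict(embeddings)
    print(f"threshold={threshold}: {len(set(labels))} clusters")
```

Fewer clusters at a given threshold indicates more near-duplicate questions, i.e., lower global diversity.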
“…The recent progress with Language Models (LMs) in Natural Language Processing (NLP) has made it even easier, cheaper, and faster for a machine to generate artificial text [3]. Generative LMs such as GPT-3 have become sophisticated at text generation, and their results are almost on par with human-written text in terms of readability, coherence, and grammatical accuracy [4].…”
Section: Introduction
confidence: 99%