2022
DOI: 10.1145/3512467
A Survey of Knowledge-enhanced Text Generation

Abstract: The goal of text-to-text generation is to make machines express like a human in many applications such as conversation, summarization, and translation. It is one of the most important yet challenging tasks in natural language processing (NLP). Various neural encoder-decoder models have been proposed to achieve the goal by learning to map input text to output text. However, the input text alone often provides limited knowledge to generate the desired output, so the performance of text generation is still far fr…


Cited by 127 publications (42 citation statements)
References 70 publications
“…Besides, leveraging knowledge graph is not the only way to promote content diversity as it is a highly knowledge-intensive task. Many existing knowledge-enhanced methods (Yu et al, 2022c) can be used to acquire different external knowledge for producing diverse outputs, e.g., taking different retrieved documents as conditions for generator.…”
Section: Future Directions
confidence: 99%
“…However, these methods were not able to explicitly control varying semantics units and produce outputs of diverse content. Meanwhile, the input text alone contains too limited knowledge to support diverse reasoning and produce multiple reasonable outputs (Yu et al, 2022c). As an example, Table 1 shows the human evaluation results on two GCR tasks.…”
Section: Outputs: 3 Different Explanations
confidence: 99%
“…Incorporating external knowledge is essential for many NLG tasks to augment the limited textual information (Yu et al, 2022c;Dong et al, 2021;Yu et al, 2022b). Some recent work explored using graph neural networks (GNN) to reason over multihop relational knowledge graph (KG) paths (Zhou et al, 2018;Jiang et al, 2019;Zhang et al, 2020a;Wu et al, 2020;Yu et al, 2022a;Zeng et al, 2021).…”
Section: Knowledge Graph For Text Generation
confidence: 99%
“…first-stage retrieval, aims to efficiently fetch all relevant documents for a given query from a large-scale collection with millions or billions of entries 2 . It plays indispensable roles as prerequisites for a broad spectrum of downstream tasks, e.g., information retrieval (IR) [2], open-domain question answering (ODQA) [3], knowledge-grounded conversation (KGC) [63], and recommendation system [67]. To make online large-scale retrieval possible, the common practice is to represent queries and documents by an encoder in a Siamese manner (i.e., Bi-Encoder, BE) [48].…”
Section: Introduction
confidence: 99%
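The quoted passage describes the bi-encoder (Siamese) setup that makes large-scale first-stage retrieval feasible: queries and documents are encoded independently, so document embeddings can be precomputed offline and compared to a query by a cheap similarity score. A minimal sketch of that idea, using bag-of-words counts as a stand-in for a learned neural encoder (the corpus, function names, and scoring choice here are illustrative assumptions, not the cited system):

```python
import math
from collections import Counter

def encode(text):
    # Stand-in embedding: in a real bi-encoder this would be a neural
    # network (e.g. a BERT-style model); here, a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "knowledge graphs store entities and relations",
    "dense retrieval encodes queries and documents separately",
    "summarization condenses a document into a short text",
]
# Document embeddings are computed once, offline; this precomputation is
# what makes first-stage retrieval over millions of entries practical.
doc_vecs = [encode(d) for d in docs]

def retrieve(query, k=1):
    # The query is encoded independently of the documents (Siamese manner),
    # then scored against the precomputed document vectors.
    qv = encode(query)
    ranked = sorted(range(len(docs)),
                    key=lambda i: cosine(qv, doc_vecs[i]),
                    reverse=True)
    return [docs[i] for i in ranked[:k]]

print(retrieve("how does dense retrieval encode documents"))
```

A cross-encoder, by contrast, would jointly encode each query–document pair, which is more accurate but too slow to score an entire large collection, so it is typically reserved for re-ranking a small candidate set.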