Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1299
Enhancing Neural Data-To-Text Generation Models with External Background Knowledge

Abstract: Recent neural models for data-to-text generation rely on massive parallel pairs of data and text to learn the writing knowledge. They often assume that writing knowledge can be acquired from the training data alone. However, when people are writing, they not only rely on the data but also consider related knowledge. In this paper, we enhance neural data-to-text models with external knowledge in a simple but effective way to improve the fidelity of generated text. Besides relying on parallel data and text as in …
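As a rough illustration only (not the architecture described in the paper, whose details are truncated above), one common way to inject external background knowledge into a neural data-to-text model is to give the decoder a second attention over encoded background facts, in addition to its attention over encoded table fields, and gate between the two context vectors. The PyTorch module below is a minimal sketch under those assumptions; all names, dimensions, and the gating scheme are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DualSourceAttention(nn.Module):
    """Attend over table-field encodings and background-fact encodings, then fuse."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.table_proj = nn.Linear(hidden_dim, hidden_dim)
        self.kb_proj = nn.Linear(hidden_dim, hidden_dim)
        # Scalar gate deciding how much to rely on the table vs. background facts.
        self.gate = nn.Linear(3 * hidden_dim, 1)

    def attend(self, query, keys, proj):
        # query: (batch, hidden); keys: (batch, n, hidden)
        scores = torch.bmm(proj(keys), query.unsqueeze(-1)).squeeze(-1)   # (batch, n)
        weights = F.softmax(scores, dim=-1)
        return torch.bmm(weights.unsqueeze(1), keys).squeeze(1)           # (batch, hidden)

    def forward(self, dec_state, table_enc, kb_enc):
        ctx_table = self.attend(dec_state, table_enc, self.table_proj)
        ctx_kb = self.attend(dec_state, kb_enc, self.kb_proj)
        g = torch.sigmoid(self.gate(torch.cat([dec_state, ctx_table, ctx_kb], dim=-1)))
        return g * ctx_table + (1 - g) * ctx_kb   # fused context fed to the decoder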

Cited by 24 publications (18 citation statements). References 21 publications.
“…WikiBio Dataset: We list WikiBio's experimental results in Table 6 and compare with models such as Table2Seq (Bao et al., 2018), Order Planning, Field Gating, Background-KB Attention (Chen et al., 2019), and the Hybrid Hierarchical Model (Liu et al., 2019a) trained with multiple auxiliary loss functions. We also train Template-GPT-2 on this dataset to observe the pre-trained model's performance.…”
Section: E2E Challenge
confidence: 99%
“…Model                                           BLEU
Table NLM (Lebret et al., 2016)                   34.70
Table2Seq (Bao et al., 2018)                      40.26
Order Planning                                    43.91
Field-Gating                                      44.71
KBAtt (Chen et al., 2019)                         44.59
Hierarchical+Auxiliary Loss (Liu et al., 2019a)   45…”
Section: E2E Challenge
confidence: 99%
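For context, the numbers quoted above are corpus-level BLEU scores comparing generated biographies against WikiBio reference descriptions. A minimal sketch of such an evaluation, using sacrebleu as one common implementation and an illustrative hypothesis/reference pair (the strings are examples, not dataset output):

import sacrebleu

# One generated sentence and one reference per example; real evaluation uses the full test set.
hypotheses = ["walter extra is a german award-winning aerobatic pilot ."]
references = ["walter extra is a german award-winning aerobatic pilot , chief aircraft designer and founder of extra flugzeugbau ."]

# sacrebleu expects a list of hypothesis strings and a list of reference lists.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.2f}")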
“…Nie et al. (2018) utilize pre-executed symbolic operations on the input table in a sequence-to-sequence model to improve the fidelity of neural table-to-text generation. Chen et al. (2019) introduce the background knowledge of the entity in the table to improve results.…”
Section: Related Work
confidence: 99%
“…Another line of work attempts to use external knowledge to improve the quality of generated text (Chen et al., 2019b). These methods allow generation from an expanded external knowledge base that may contain information not relevant to the input table.…”
Section: Related Work
confidence: 99%