Step-by-Step: Separating Planning from Realization in Neural Data-to-Text Generation
Preprint, 2019
DOI: 10.48550/arxiv.1904.03396

Abstract: Data-to-text generation can be conceptually divided into two parts: ordering and structuring the information (planning), and generating fluent language describing the information (realization). Modern neural generation systems conflate these two steps into a single end-to-end differentiable system. We propose to split the generation process into a symbolic text-planning stage that is faithful to the input, followed by a neural generation stage that focuses only on realization. For training a plan-to-text gener…

Cited by 5 publications (21 citation statements)
References 18 publications
“…Modeling and Inference Method. Planning or skeleton is a common method in data-to-text tasks to improve the faithfulness to the input [124]. Liu et al [107] propose a two-step generator with a separate text planner, which is augmented by auxiliary entity information.…”
Section: Hallucination Mitigation in Data-to-Text Generation
confidence: 99%
“…To improve the probability of producing high-quality texts, Zhu et al [70] proposed a model minimizing the Kullback-Leibler (KL) divergence between the distributions of the real text and generated text. Moryossef et al [40] proposed a model combining the pipeline system and neural networks to match a reference text with its corresponding text plan to train a plan-to-text generator. In addition, some work has focused solely on improving the performance of the seen split part of dataset, where the entity types have appeared in the training set.…”
Section: RDF-to-Text Generation
confidence: 99%
“…1 In neural D2T, the common approaches train a neural end-to-end encoder-decoder system that encodes the input data and decodes an output text. In recent work (Moryossef et al, 2019) we proposed to adopt ideas from "traditional" language generation approaches (i.e. Reiter and Dale (2000); Walker et al (2007); Gatt and Krahmer (2017)) that separate the generation into a planning stage that determines the order and structure of the expressed facts, and a realization stage that maps the plan to natural language text.…”
Section: Introduction
confidence: 99%
“…In this work we adopt the step-by-step framework of Moryossef et al (2019) and propose four independent extensions that improve aspects of our original system: we suggest a new plan generation mechanism, based on a trainable-yetverifiable neural decoder, that is orders of magnitude faster than the original one ( §3); we use knowledge of the plan structure to add typing information to plan elements. This improves the system's performance on unseen relations and entities ( §4); the separation of planning from realizations allows the incorporation of a simple output verification heuristic that drastically improves the correctness of the output ( §5); and finally we incorporate a post-processing referring expression generation (REG) component, as proposed but not implemented in our previous work, to improve the naturalness of the resulting output ( §6).…”
Section: Introduction
confidence: 99%