2021
DOI: 10.1162/tacl_a_00381

Data-to-text Generation with Macro Planning

Abstract: Recent approaches to data-to-text generation have adopted the very successful encoder-decoder architecture or variants thereof. These models generate text that is fluent (but often imprecise) and perform quite poorly at selecting appropriate content and ordering it coherently. To overcome some of these issues, we propose a neural model with a macro planning stage followed by a generation stage reminiscent of traditional methods which embrace separate modules for planning and surface realization. Macro plans re…
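To make the plan-then-generate decomposition in the abstract concrete, below is a minimal sketch of a two-stage pipeline. All function names, the record format, and the rule-based planner and template realizer are illustrative assumptions, not the paper's implementation (the paper learns macro plans over entities and events from data and feeds them to a neural generator):

# Illustrative plan-then-generate pipeline for data-to-text generation.
# Names and the plan format are hypothetical stand-ins for learned modules.

def macro_plan(records: list[dict]) -> list[list[dict]]:
    """Stage 1 (content planning): select and order records into
    paragraph-level plan steps; here, crudely, one step per entity."""
    by_entity: dict[str, list[dict]] = {}
    for r in records:
        by_entity.setdefault(r["entity"], []).append(r)
    return list(by_entity.values())

def realize(plan: list[list[dict]]) -> str:
    """Stage 2 (surface realization): verbalize each plan step;
    a template stands in for the neural generator."""
    sentences = []
    for step in plan:
        entity = step[0]["entity"]
        facts = ", ".join(f'{r["type"]} {r["value"]}' for r in step)
        sentences.append(f"{entity} recorded {facts}.")
    return " ".join(sentences)

records = [
    {"entity": "LeBron James", "type": "PTS", "value": 32},
    {"entity": "LeBron James", "type": "AST", "value": 9},
    {"entity": "Kevin Love", "type": "REB", "value": 13},
]
print(realize(macro_plan(records)))
# -> LeBron James recorded PTS 32, AST 9. Kevin Love recorded REB 13.

The point of the decomposition is that content selection and ordering errors are handled by the planner, so the generator only has to verbalize an already-organized plan.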

Cited by 37 publications (31 citation statements). References 39 publications.

Citation statements (ordered by relevance):
“…As an improvement, neural two-stage models (Puduppully et al., 2019; Moryossef et al., 2019; Puduppully and Lapata, 2021; Su et al., 2021) decompose table-to-text generation into content planning and surface generation stages. In general, content planning is implemented by Pointer Networks (Vinyals et al., 2015).…”
Section: Related Work
confidence: 99%
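For readers unfamiliar with the pointer-network planners this statement refers to: a pointer network's output "vocabulary" is the input itself, so each decoding step attends over the encoded records and points at the one to place next in the plan. The following is a minimal sketch; the class, dimensions, and greedy decoding loop are assumptions for illustration, not code from any of the cited papers:

# Minimal pointer-network content-planning step (illustrative sketch).
import torch
import torch.nn as nn

class PointerPlanner(nn.Module):
    def __init__(self, hidden_size: int = 256):
        super().__init__()
        self.decoder = nn.LSTMCell(hidden_size, hidden_size)
        # Additive (Bahdanau-style) scoring over encoder states.
        self.w_enc = nn.Linear(hidden_size, hidden_size, bias=False)
        self.w_dec = nn.Linear(hidden_size, hidden_size, bias=False)
        self.v = nn.Linear(hidden_size, 1, bias=False)

    def step(self, enc_states, dec_input, state):
        """One plan step: return a distribution over input positions."""
        h, c = self.decoder(dec_input, state)
        # Score every encoded record against the current decoder state.
        scores = self.v(torch.tanh(self.w_enc(enc_states) + self.w_dec(h).unsqueeze(1)))
        attn = torch.softmax(scores.squeeze(-1), dim=-1)  # (batch, num_records)
        return attn, (h, c)

# Usage: greedily pick records to form a content plan.
planner = PointerPlanner()
enc_states = torch.randn(1, 20, 256)   # 20 encoded table records (hypothetical)
dec_input = torch.zeros(1, 256)        # start symbol
state = (torch.zeros(1, 256), torch.zeros(1, 256))
plan = []
for _ in range(5):
    attn, state = planner.step(enc_states, dec_input, state)
    idx = attn.argmax(dim=-1)          # pointer to the chosen record
    plan.append(idx.item())
    dec_input = enc_states[0, idx]     # feed the chosen record back in
print(plan)                            # indices of selected records, in order

Because the model points at input positions rather than generating from a fixed vocabulary, the plan is guaranteed to consist of actual input records, which is what makes pointer networks a natural fit for content planning.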
“…We conduct experiments using five representative Seq2Seq models on four commonly used data-to-text datasets and evaluate the generated texts accordingly. Note that we do not use models designed for specific datasets or data structures (Moryossef et al., 2019; Rebuffel et al., 2020; Puduppully and Lapata, 2021), but instead adopt models that accept inputs of different formats and structures, which makes comparison across datasets convenient. Besides, most models designed specifically for data-to-text generation are themselves based on these typical Seq2Seq models (Ferreira et al., 2019; Rebuffel et al., 2020), which further justifies our selection.…”
Section: Models and Datasets
confidence: 99%
“…To alleviate this problem, one research direction adopts coarse-to-fine progressive text generation (Tan et al., 2021). This generation paradigm has been studied in many task-specific text generation systems, such as data-to-text generation (Moryossef et al., 2019; Puduppully and Lapata, 2021), storytelling (Goldfarb-Tarrant et al., 2020; Orbach and Goldberg, 2020), and dialogue generation (Xu et al., 2020a). Our work adopts a generative event transition planner trained on a large number of event transition paths, aiming to arrange the ensuing events in open-ended text generation.…”
Section: Related Work
confidence: 99%