2020
DOI: 10.1162/coli_a_00363

Scalable Micro-planned Generation of Discourse from Structured Data

Abstract: We present a framework for generating natural language descriptions from structured data such as tables; the problem falls under data-to-text natural language generation (NLG). Modern data-to-text NLG systems typically employ end-to-end statistical and neural architectures that learn from a limited amount of task-specific labeled data, and therefore exhibit limited scalability, domain adaptability, and interpretability. Unlike these systems, ours is a modular, pipeline-based approach, and does …

Cited by 15 publications (9 citation statements) · References 24 publications
“…Although it is generally assumed that task-specific parallel data is available for model training, Laha et al. (2020) do away with this assumption and present a three-stage pipeline model which learns from monolingual corpora. They first convert the input to a form of tuples, which in turn are expressed in simple sentences, followed by the third stage of merging simple sentences to form more complex ones by aggregation and referring expression generation.…”
Section: Related Work
confidence: 99%
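
As a concrete illustration of the three-stage pipeline described above, the following minimal Python sketch flattens a table row into tuples, realizes each tuple as a simple template sentence, and merges the sentences with a crude pronoun-based referring-expression step. All function names, templates, and the aggregation rule are hypothetical stand-ins for exposition, not the actual implementation of Laha et al. (2020).

# Hypothetical three-stage data-to-text sketch (illustrative names only):
# stage 1 flattens a table row into (subject, attribute, value) tuples,
# stage 2 realizes each tuple as a simple template sentence, and
# stage 3 merges the sentences with a crude referring-expression rule.
from typing import Dict, List, Tuple

def table_to_tuples(row: Dict[str, str], subject_key: str) -> List[Tuple[str, str, str]]:
    """Stage 1: convert a table row into (subject, attribute, value) tuples."""
    subject = row[subject_key]
    return [(subject, attr, val) for attr, val in row.items() if attr != subject_key]

def tuple_to_sentence(t: Tuple[str, str, str]) -> str:
    """Stage 2: express a single tuple as a simple sentence."""
    subject, attr, value = t
    return f"{subject} has {value} {attr}."

def aggregate(sentences: List[str], subject: str) -> str:
    """Stage 3: merge simple sentences, pronominalizing repeated mentions
    of the subject (a minimal referring-expression-generation step)."""
    merged = [sentences[0]]
    for s in sentences[1:]:
        merged.append(s.replace(subject, "It", 1))
    return " ".join(merged)

row = {"team": "Raptors", "wins": "58", "losses": "24"}
simple = [tuple_to_sentence(t) for t in table_to_tuples(row, subject_key="team")]
print(aggregate(simple, subject="Raptors"))  # Raptors has 58 wins. It has 24 losses.

Each stage is independently inspectable and replaceable, which is the interpretability and modularity argument the quoted passage makes against end-to-end models.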
“…Our work also attempts to alleviate deficiencies in neural data-to-text generation models. In contrast to previous approaches (Puduppully et al., 2019a; Moryossef et al., 2019; Laha et al., 2020), we place emphasis on macro planning and create plans representing the high-level organization of a document, including both its content and structure. We share with previous work (e.g., Moryossef et al. 2019) the use of a two-stage architecture.…”
Section: Related Work
confidence: 99%
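
For contrast with the micro-planned pipeline above, a two-stage plan-then-realize architecture of the kind this citing work describes can be sketched as a macro planner that selects and orders content into a document plan, followed by a realizer that verbalizes it. The plan representation (per-entity paragraph plans ordered by salience) and all names below are assumptions made for illustration, not the citing authors' actual model, which learns both stages neurally.

# Hypothetical two-stage plan-then-realize sketch (illustrative names only):
# a macro planner groups and orders records into paragraph plans, then a
# realizer verbalizes each plan with templates.
from typing import Dict, List

def macro_plan(records: List[Dict[str, str]]) -> List[List[Dict[str, str]]]:
    """Stage 1: build a document plan as an ordered list of paragraph plans,
    one per entity, ordered here by how many records mention the entity."""
    by_entity: Dict[str, List[Dict[str, str]]] = {}
    for r in records:
        by_entity.setdefault(r["entity"], []).append(r)
    return sorted(by_entity.values(), key=len, reverse=True)

def realize(plan: List[List[Dict[str, str]]]) -> str:
    """Stage 2: verbalize each paragraph plan with a fixed template."""
    paragraphs = []
    for para in plan:
        sentences = [f"{r['entity']} recorded {r['value']} {r['attribute']}." for r in para]
        paragraphs.append(" ".join(sentences))
    return "\n\n".join(paragraphs)

records = [
    {"entity": "Raptors", "attribute": "points", "value": "114"},
    {"entity": "Raptors", "attribute": "assists", "value": "30"},
    {"entity": "Celtics", "attribute": "points", "value": "106"},
]
print(realize(macro_plan(records)))  # two paragraphs, Raptors first

The macro plan fixes both content (which records appear) and structure (paragraph grouping and order) before any text is produced, which is the distinction the quoted passage draws against micro-planning approaches.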
“…Recently, end-to-end learning has become a trend in this field (Mei, Bansal, and Walter 2016; Chisholm, Radford, and Hachey 2017; Kaffee et al. 2018; Jhamtani et al. 2018; Bao et al. 2018; Liu et al. 2019a; Dušek, Novikova, and Rieser 2020). Among them, some work introduces differentiable planning modules (Sha et al. 2018; Laha et al. 2018; Puduppully, Dong, and Lapata 2018). Our paper focuses on two-stage generation which incorporates a separate text planner (Ferreira et al. 2019; Ma et al. 2019).…”
Section: Related Work
confidence: 99%