Proceedings of the 12th International Conference on Natural Language Generation 2019
DOI: 10.18653/v1/w19-8645
Improving Quality and Efficiency in Plan-based Neural Data-to-text Generation

Abstract: We follow the step-by-step approach to neural data-to-text generation we proposed in Moryossef et al. (2019), in which the generation process is divided into a text-planning stage followed by a plan-realization stage. We suggest four extensions to that framework: (1) we introduce a trainable neural planning component that can generate effective plans several orders of magnitude faster than the original planner; (2) we incorporate typing hints that improve the model's ability to deal with unseen relations and e…
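The abstract's plan-then-realize split can be illustrated with a minimal sketch. All names below are hypothetical, and the toy rule-based planner and template realizer are stand-ins for the paper's neural components; only the overall plan → linearize → realize flow mirrors the described framework.

```python
# Hedged sketch of a two-stage data-to-text pipeline: a planning stage
# orders the input triples, then a realization stage verbalizes the
# linearized plan. The paper uses neural models for both stages; the
# deterministic planner and template realizer here are illustrative only.
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def plan(triples: List[Triple]) -> List[Triple]:
    """Toy planner: order facts by subject, then relation.
    (The paper replaces this with a trainable neural planner.)"""
    return sorted(triples, key=lambda t: (t[0], t[1]))

def linearize(ordered: List[Triple]) -> str:
    """Flatten the plan into one string the realizer consumes."""
    return " ".join(f"<s> {s} <r> {r} <o> {o}" for s, r, o in ordered)

def realize(linearized: str) -> str:
    """Stub realizer: a template stand-in for the neural seq2seq model."""
    sentences = []
    for chunk in linearized.split("<s> ")[1:]:
        subj, rest = chunk.split(" <r> ")
        rel, obj = rest.split(" <o> ")
        sentences.append(f"{subj.strip()} {rel.strip().replace('_', ' ')} {obj.strip()}.")
    return " ".join(sentences)

triples = [("John_Doe", "birth_place", "London"),
           ("John_Doe", "occupation", "engineer")]
print(realize(linearize(plan(triples))))
```

The separation lets the plan be inspected or swapped independently of the surface realizer, which is the property the extensions in the abstract build on.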

Cited by 11 publications (8 citation statements). References 16 publications (28 reference statements).
“…LSTM/gate recurrent unit (GRU) was used by Moryossef et al. [8] to encode WebNLG graphs. Jafarbigloo et al.…”
Section: Related Work
confidence: 99%
“…Following [14], [33], [39], the final stage of the "Explainer Processor" involves linearization of the data into a flat string: p, v and the newly substituted c and f are formatted into the template below. A cap, top_n, set at min(n, 10) or min(n, 20) during training, is used to limit the number of top features passed into the model; positive and negative features are subsets of the capped top features, such that top_n⁺ + top_n⁻ = top_n; the lowest-impact features are not affected.…”
Section: Textual Explanation Pipeline
confidence: 99%
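The capping step quoted above can be sketched in a few lines. The function name, field layout, and default cap are assumptions for illustration; the invariant shown is the one stated in the citation: the positive and negative subsets partition the capped top-n features.

```python
# Hedged sketch of capping explanation features: keep the top-n features
# by absolute impact, then split the kept set into positive and negative
# subsets so that len(positive) + len(negative) == min(cap, n).
# Names and the cap default are illustrative assumptions.
def cap_top_features(features, impacts, cap=10):
    """Return (positive, negative) subsets of the highest-|impact| features."""
    n = min(cap, len(features))
    ranked = sorted(zip(features, impacts),
                    key=lambda fi: abs(fi[1]), reverse=True)[:n]
    positive = [f for f, imp in ranked if imp >= 0]
    negative = [f for f, imp in ranked if imp < 0]
    return positive, negative

pos, neg = cap_top_features(["age", "income", "debt", "tenure"],
                            [0.9, -0.7, 0.1, -0.4], cap=3)
print(pos, neg)
```

Dropping only the lowest-|impact| features keeps the flat string short enough for the model while preserving the most informative slots, matching the quoted description.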
“…Shen et al. (2019) used techniques from computational pragmatics and modeled the generation task as a game between speakers and listeners. Despite following the generation-reranking paradigm explored previously in the data-to-text domain (Agarwal et al., 2018; Moryossef et al., 2019a; Dušek et al., 2019), and in other domains including machine translation (Shen et al., 2004), dialogue generation (Wen et al., 2015), and ASR (Morbini et al., 2012), our work has several distinctive aspects compared to previous works. First, we do not make extra assumptions, such as availability of precise MR parsers.…”
Section: Related Work
confidence: 99%
“…This error detection task has commonly relied on handwritten mappings from data values to potential realizations. Such rules were used to compute a Slot Error Rate (SER) metric (Dušek et al., 2019; Juraska et al., 2019; Moryossef et al., 2019a). For instance, Dušek et al. (2019) use SER for reranking beam elements during decoding in an attention-based sequence-to-sequence model on the Cleaned E2E dataset.…”
Section: Related Work
confidence: 99%
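The handwritten-mapping approach to SER described in this citation can be sketched as follows. The mapping contents and the exact error definition here are simplified assumptions, not the rules from the cited papers: a slot counts as an error when none of its listed surface forms appears in the generated text.

```python
# Illustrative Slot Error Rate (SER) check via handwritten mappings from
# meaning-representation (MR) slot values to acceptable surface forms.
# The mapping below and the error definition are simplified assumptions.
REALIZATIONS = {  # hypothetical slot value -> acceptable surface forms
    "priceRange=cheap": ["cheap", "low-priced", "inexpensive"],
    "familyFriendly=yes": ["family-friendly", "kid friendly", "child friendly"],
}

def slot_error_rate(mr_slots, text):
    """Fraction of MR slots with no listed realization in the text."""
    text = text.lower()
    missed = sum(1 for slot in mr_slots
                 if not any(form in text for form in REALIZATIONS[slot]))
    return missed / len(mr_slots)

ser = slot_error_rate(["priceRange=cheap", "familyFriendly=yes"],
                      "A cheap restaurant near the river.")
print(ser)  # one of the two slots is unrealized
```

A score like this can rerank beam candidates, as the citation notes for Dušek et al. (2019): candidates realizing more of the input slots are preferred at decoding time.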