Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1052

Neural data-to-text generation: A comparison between pipeline and end-to-end architectures

Abstract: Traditionally, most data-to-text applications have been designed using a modular pipeline architecture, in which non-linguistic input data is converted into natural language through several intermediate transformations. By contrast, recent neural models for data-to-text generation have been proposed as end-to-end approaches, where the non-linguistic input is rendered in natural language with far fewer explicit intermediate representations in between. This study introduces a systematic comparison between neural pipeline and end-to-end data-to-text approaches.
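The contrast the abstract draws can be made concrete with a short sketch. The following is a hypothetical illustration, not the authors' implementation: all function names, the toy lexicon, and the heuristics are placeholders standing in for the pipeline stages the paper describes (discourse ordering, lexicalization, referring expression generation, surface realization) versus a single sequence-to-sequence model.

```python
# Minimal, self-contained sketch of the two architectures being compared.
# Illustration only; every rule and name below is a hypothetical stand-in.

def pipeline_generate(triples):
    """Pipeline: explicit intermediate steps between input and text."""
    # Discourse ordering: a toy heuristic (group triples by subject).
    plan = sorted(triples, key=lambda t: t[0])
    # Lexicalization: map each predicate to a sentence template.
    lexicon = {"birthPlace": "{s} was born in {o}.",
               "occupation": "{s} worked as {o}."}
    sentences = [lexicon[p].format(s=s, o=o) for s, p, o in plan]
    # Referring expression generation: pronominalize repeated subjects.
    seen, out = set(), []
    for subj, sent in zip((t[0] for t in plan), sentences):
        out.append(sent.replace(subj, "They", 1) if subj in seen else sent)
        seen.add(subj)
    # Surface realization: join the planned sentences into one text.
    return " ".join(out)

def end_to_end_generate(triples, model):
    """End-to-end: one neural model maps linearized triples straight to
    text, with no explicit intermediate representations."""
    source = " <TRIPLE> ".join(" ".join(t) for t in triples)
    return model.decode(source)  # e.g. a trained GRU or Transformer seq2seq

if __name__ == "__main__":
    data = [("Ada_Lovelace", "birthPlace", "London"),
            ("Ada_Lovelace", "occupation", "a mathematician")]
    print(pipeline_generate(data))
    # -> Ada_Lovelace was born in London. They worked as a mathematician.
```

The point of the sketch is the interface, not the rules: in the pipeline each intermediate representation can be inspected and evaluated on its own, whereas the end-to-end model exposes only the final string.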

Cited by 83 publications (92 citation statements)
References 30 publications
“…text planning, surface realization, referring expression generation), with a concomitant focus on text output quality, for which an intrinsic evaluation may be sufficient. However, we are starting to see a swing back towards a full pipeline approach with separate neural modules handling sub-tasks (Castro Ferreira et al, 2019), which may also cause a resurgence of extrinsic evaluation.…”
Section: Intrinsic and Extrinsic Evaluation
confidence: 99%
“…Specifically, for Chinese and Japanese, we require a proper method to tokenize/detokenize the results produced by our approach. Moreover, we aim to design the task based on novel pipeline architectures for Natural Language Generation (Ferreira et al., 2019)…”
Section: Results
confidence: 99%
“…Works like Moryossef et al. (2019a,b) and Castro Ferreira et al. (2019) show that treating various planning tasks as separate components in a pipeline, where the components themselves are implemented with neural models, improves the overall quality and semantic correctness of generated utterances relative to a completely end-to-end neural NLG model. However, they do not test the systematicity of the neural generation components, i.e.…”
Section: Related Work
confidence: 99%