Proceedings of the 11th International Conference on Natural Language Generation 2018
DOI: 10.18653/v1/w18-6539
Findings of the E2E NLG Challenge

Abstract: This paper summarises the experimental setup and results of the first shared task on end-to-end (E2E) natural language generation (NLG) in spoken dialogue systems. Recent end-to-end generation systems are promising since they reduce the need for data annotation. However, they are currently limited to small, delexicalised datasets. The E2E NLG shared task aims to assess whether these novel approaches can generate better-quality output by learning from a dataset containing higher lexical richness, syntactic comp…

Cited by 88 publications (88 citation statements)
References 29 publications
“…We apply pragmatics to encourage output strings from which the input MR can be identified. For our S₀ model, we use a publicly-released neural generation system (Puzikov and Gurevych, 2018) that achieves comparable performance to the best published results in Dušek et al. (2018). Abstractive Summarization Our second task is multi-sentence document summarization.…”
Section: Meaning Representations
confidence: 99%
“…We report the task's five automatic metrics: BLEU (Papineni et al., 2002), NIST (Doddington, 2002), METEOR (Lavie and Agarwal, 2007), ROUGE-L (Lin, 2004) and CIDEr (Vedantam et al., 2015). Table 1 compares the performance of our base S₀ and pragmatic models to the baseline TGen system (Dušek and Jurčíček, 2016) and the best previous result from the 20 primary systems evaluated in the E2E challenge (Dušek et al., 2018). The systems obtaining these results encompass a range of approaches: a template system (Puzikov and Gurevych, 2018), a neural model (Zhang et al., 2018), models trained with reinforcement learning (Gong, 2018), and systems using ensembling and reranking (Juraska et al., 2018).…”
Section: Meaning Representations
confidence: 99%
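The citation above lists the challenge's five automatic metrics. As a minimal illustration of how such n-gram overlap metrics work, the sketch below implements a simplified single-reference BLEU (clipped n-gram precisions with a brevity penalty, no smoothing); the example sentences are invented for illustration, not taken from the challenge data.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def simple_bleu(reference, hypothesis, max_n=4):
    """Simplified BLEU: geometric mean of clipped n-gram precisions
    times a brevity penalty. Single reference, no smoothing."""
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hypothesis, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clip each hypothesis n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    log_mean = sum(math.log(p) for p in precisions) / max_n
    # Penalise hypotheses shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / len(hypothesis)))
    return bp * math.exp(log_mean)

ref = "there is a cheap family friendly restaurant near the river".split()
hyp = "a cheap family friendly restaurant is near the river".split()
print(round(simple_bleu(ref, ref), 2))  # identical strings score 1.0
```

Production evaluations use the official challenge scripts (or libraries such as NLTK) rather than a hand-rolled metric; this sketch only shows the mechanics shared by BLEU and NIST.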
“…Our model performs sentence-level content planning for information selection and ordering, and style-controlled surface realization to produce the final generation. We focus on conditional text generation problems (Lebret et al., 2016; Colin et al., 2016; Dušek et al., 2018): As shown in Figure 2, the input to our model consists of a topic statement and a set of keyphrases. The output is a relevant and coherent paragraph to reflect the salient points from the input.…”
Section: Simple Wikipedia
confidence: 99%
“…Traditionally, these two subproblems have been tackled separately. In recent years, neural generation models, especially the encoder-decoder model, solve these two subproblems jointly and have achieved remarkable successes in several benchmarks (Mei et al., 2016; Lebret et al., 2016; Wiseman et al., 2017; Dušek et al., 2018; Nie et al., 2018). Such end-to-end data-to-text models rely on massive parallel pairs of data and text to learn the writing knowledge.…”
Section: Infobox
confidence: 99%
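The citation above describes end-to-end data-to-text models trained on parallel (meaning representation, text) pairs, and the abstract mentions delexicalised datasets. The sketch below shows what such a pair looks like in the E2E challenge's attribute[value] style, with a simple delexicalisation step; the restaurant example and the helper names are illustrative, not from the challenge data or tooling.

```python
def parse_mr(mr):
    """Parse an 'attr[value], attr[value], ...' meaning representation
    into a dict of slot names to values."""
    slots = {}
    for part in mr.split(", "):
        attr, _, rest = part.partition("[")
        slots[attr] = rest.rstrip("]")
    return slots

def delexicalise(text, slots, keys=("name", "near")):
    """Replace open-class slot values with placeholders, as in the
    small delexicalised datasets the abstract refers to."""
    for key in keys:
        if key in slots:
            text = text.replace(slots[key], f"<{key.upper()}>")
    return text

mr = "name[The Vaults], eatType[pub], priceRange[cheap], near[Café Adriatic]"
text = "The Vaults is a cheap pub near Café Adriatic."

slots = parse_mr(mr)
print(slots["eatType"])           # → pub
print(delexicalise(text, slots))  # → <NAME> is a cheap pub near <NEAR>.
```

Delexicalisation lets a model learn sentence patterns independently of specific entity names; the E2E dataset's richer lexical variety is precisely what makes learning without this crutch harder, which is the question the shared task probes.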