“…Modern approaches to data-to-text generation have shown great promise (Lebret et al., 2016; Mei et al., 2016; Perez-Beltrachini and Lapata, 2018; Puduppully et al., 2019; Wiseman et al., 2017) thanks to the use of large-scale datasets and neural network models which are trained end-to-end based on the very successful encoder-decoder architecture (Bahdanau et al., 2015). In contrast to traditional methods, which typically implement pipeline-style architectures (Reiter and Dale, 2000) with modules devoted to individual generation components (e.g., content selection or lexical choice), neural models have no special-purpose mechanisms for determining how best to generate a text.…”
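The contrast drawn above can be made concrete with a toy forward pass of an encoder-decoder: the encoder folds the input into a context vector, and the decoder emits tokens from it with no dedicated content-selection or lexical-choice module. This is a minimal sketch only; all names, sizes, and weights are illustrative assumptions (random, untrained), not taken from any cited paper, and attention (Bahdanau et al., 2015) is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, hidden = 10, 8          # toy vocabulary and hidden sizes (assumed)

E = rng.normal(size=(vocab, hidden))       # embedding table
W_enc = rng.normal(size=(hidden, hidden))  # encoder recurrence weights
W_dec = rng.normal(size=(hidden, hidden))  # decoder recurrence weights
W_out = rng.normal(size=(hidden, vocab))   # projection to vocabulary logits

def encode(tokens):
    """Fold the linearized input record into a single context vector."""
    h = np.zeros(hidden)
    for t in tokens:
        h = np.tanh(E[t] + W_enc @ h)
    return h

def decode(context, steps):
    """Greedily emit `steps` output token ids from the context vector."""
    h, out, tok = context, [], 0
    for _ in range(steps):
        h = np.tanh(E[tok] + W_dec @ h)
        logits = h @ W_out
        tok = int(np.argmax(logits))  # greedy pick; real systems beam-search
        out.append(tok)
    return out

generated = decode(encode([1, 4, 2, 7]), steps=5)
print(generated)  # five token ids from the toy vocabulary
```

Training such a model end-to-end means backpropagating a single loss through both `decode` and `encode`, which is precisely why no individual module corresponds to a classical pipeline stage.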