Proceedings of the 10th International Conference on Natural Language Generation 2017
DOI: 10.18653/v1/w17-3501

Linguistic realisation as machine translation: Comparing different MT models for AMR-to-text generation

Abstract: In this paper, we study AMR-to-text generation, framing it as a translation task and comparing two different MT approaches (Phrase-based and Neural MT). We systematically study the effects of three AMR preprocessing steps (Delexicalisation, Compression, and Linearisation) applied before the MT phase. Our results show that preprocessing indeed helps, although the benefits differ for the two MT models. The implementations of the models are publicly available¹.
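
The abstract names but does not spell out the three preprocessing steps; as a rough illustration of the first of them, the sketch below shows delexicalisation, i.e. replacing named entities with placeholder tokens before the MT phase and restoring them afterwards. The function and placeholder names are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of delexicalisation, assuming entities are given as
# surface strings. Names (delexicalise, ENT_i) are hypothetical.

def delexicalise(sentence, entities):
    """Replace each entity with an indexed placeholder; return text and map."""
    mapping = {}
    for i, entity in enumerate(entities):
        placeholder = f"ENT_{i}"
        sentence = sentence.replace(entity, placeholder)
        mapping[placeholder] = entity
    return sentence, mapping

def relexicalise(sentence, mapping):
    """Restore the original entities after generation."""
    for placeholder, entity in mapping.items():
        sentence = sentence.replace(placeholder, entity)
    return sentence

text, mapping = delexicalise("John visited New York", ["John", "New York"])
print(text)                         # ENT_0 visited ENT_1
print(relexicalise(text, mapping))  # John visited New York
```

Delexicalisation shrinks the vocabulary the MT model must learn, which is one plausible reason preprocessing helps in the paper's experiments.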

Cited by 24 publications (39 citation statements) · References 23 publications (72 reference statements)

“…Early works on AMR-to-text generation employ statistical methods (Flanigan et al., 2016b; Pourdamghani et al., 2016; Castro Ferreira et al., 2017) and apply linearization of the graph by means of a depth-first traversal. Recent neural approaches have exhibited success by linearising the input graph and using a sequence-to-sequence architecture…”
Section: Related Work
confidence: 99%
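
The depth-first linearisation mentioned in this quote can be made concrete with a short sketch. The graph encoding (a dict from concept to a list of (role, child) pairs) and the function name are illustrative assumptions, not code from the cited systems; re-entrant nodes are emitted as plain leaves on revisits.

```python
# A minimal sketch of depth-first AMR linearisation: flatten the graph
# into a token sequence of concept and relation labels.

def linearise(graph, node, visited=None):
    """Depth-first traversal producing a flat token list."""
    if visited is None:
        visited = set()
    tokens = [node]                 # emit the concept label
    if node in visited:             # re-entrant node: emit as a leaf only
        return tokens
    visited.add(node)
    for role, child in graph.get(node, []):
        tokens.append(role)         # emit the relation label
        tokens.extend(linearise(graph, child, visited))
    return tokens

# AMR for "The boy wants to go":
# (w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))
amr = {
    "want-01": [(":ARG0", "boy"), (":ARG1", "go-02")],
    "go-02":   [(":ARG0", "boy")],
}
print(linearise(amr, "want-01"))
# ['want-01', ':ARG0', 'boy', ':ARG1', 'go-02', ':ARG0', 'boy']
```
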
“…The model has to associate each attribute with its location in a sentence template. However, S2S models can learn wrong associations between inputs and targets with limited data, which was also shown by Ferreira et al. (2017). Additionally, consider that we may see the generated texts for similar inputs: There is an expensive British Restaurant called the Eagle…”
Section: Learning Latent Sentence Templates
confidence: 98%
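
The attribute-to-slot association this quote describes amounts to filling slots in a sentence template; a minimal sketch, with the template and attribute names assumed from the E2E-style example in the quote:

```python
# Illustrative only: a sentence template whose slots are filled by input
# attributes. Slot names (priceRange, food, name) are assumptions.
template = "There is an {priceRange} {food} restaurant called {name}."
attributes = {"priceRange": "expensive", "food": "British", "name": "the Eagle"}
print(template.format(**attributes))
# There is an expensive British restaurant called the Eagle.
```

An S2S model has to learn these associations implicitly from data, which is where limited training data can put the wrong attribute in the wrong slot.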
“…From an NLP perspective, one of the main research problems in this paradigm has become the choice of the graph encoding strategy. The most popular method is linearizing it into a sequence of tokens and encoding using a variant of a recurrent neural network (RNN) (Gardent et al., 2017; Castro Ferreira et al., 2017; Konstas et al., 2017). Another prominent approach is using graph-to-text neural networks (Song et al., 2018; Trisedya et al., 2018)…”
Section: Related Work
confidence: 99%
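
The linearise-and-encode strategy contrasted here with graph-to-text networks can be sketched in a few lines, assuming PyTorch; the vocabulary, layer sizes, and token sequence below are illustrative, not taken from any of the cited systems.

```python
# A minimal sketch of encoding a linearised AMR token sequence with an RNN
# (here an LSTM), as in the linearise-and-encode approaches cited above.
import torch
import torch.nn as nn

vocab = {"<pad>": 0, "want-01": 1, ":ARG0": 2, "boy": 3, ":ARG1": 4, "go-02": 5}
tokens = ["want-01", ":ARG0", "boy", ":ARG1", "go-02", ":ARG0", "boy"]

embed = nn.Embedding(len(vocab), 64)           # token embeddings
encoder = nn.LSTM(64, 128, batch_first=True)   # RNN encoder over the sequence

ids = torch.tensor([[vocab[t] for t in tokens]])   # shape: (1, seq_len)
outputs, (h, c) = encoder(embed(ids))
print(outputs.shape)  # (1, 7, 128): one hidden state per linearised token
```

A decoder with attention over these per-token states would then generate the output text, which is the standard sequence-to-sequence setup the quoted passage refers to.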