Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.169

Lightweight, Dynamic Graph Convolutional Networks for AMR-to-Text Generation

Abstract: AMR-to-text generation transduces Abstract Meaning Representation (AMR) structures into text. A key challenge in this task is to efficiently learn effective graph representations. Previously, Graph Convolutional Networks (GCNs) were used to encode input AMRs; however, vanilla GCNs cannot capture non-local information because they follow a local (first-order) information aggregation scheme. To account for these issues, larger and deeper GCN models are required to capture more complex …
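The abstract's point about first-order aggregation can be made concrete with a minimal sketch (not the paper's model): in a vanilla GCN layer, each node only mixes in its immediate neighbors, so information from a node k hops away requires k stacked layers to arrive. All names below are illustrative.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One vanilla GCN layer: each node aggregates only its
    first-order (immediate) neighbors, then applies a linear map
    with a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])         # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(deg ** -0.5)      # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# A 4-node path graph 0-1-2-3: a signal at node 3 reaches node 0
# only after three stacked layers, illustrating why vanilla GCNs
# need depth to capture non-local structure.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.eye(4)   # one-hot node features (illustrative)
W = np.eye(4)   # identity weights, so only aggregation matters
H1 = gcn_layer(A, H, W)
H2 = gcn_layer(A, H1, W)
H3 = gcn_layer(A, H2, W)
# H1[0, 3] and H2[0, 3] are still 0: node 3's feature has not
# propagated to node 0 yet; H3[0, 3] is positive.
```

Deeper stacks widen the receptive field but add parameters, which is exactly the cost the paper's lightweight, dynamic GCNs aim to avoid.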

Cited by 7 publications (9 citation statements)
References 34 publications (61 reference statements)
“…In LDC2017T10, STRUCTADAPT-RGCN achieves a BLEU score of 44.03 training only 6.3% of the T5 base parameters, outperforming Ribeiro et al. (2020b), which fine-tunes T5 updating significantly more parameters. STRUCTADAPT also achieves state-of-the-art performance on LDC2020T02, considerably improving over Zhang et al. (2020b). Overall, the results indicate that explicitly considering the graph structure using an adapter mechanism is effective for AMR-to-text generation, significantly reducing the number of trained parameters while improving generation quality.…”
Section: Results
confidence: 78%
“…Dev BLEU is used for model selection. Following recent work in AMR-to-text generation (Mager et al., 2020; Zhang et al., 2020b), we use the LDC2017T10 and LDC2020T02 corpora for evaluation of the proposed model. An instance in both datasets consists of a sentence annotated with its corresponding AMR graph.…”
Section: Methods
confidence: 99%
“…BLEU is used for the stopping criterion. Following recent work (Mager et al., 2020; Zhang et al., 2020b), we evaluate our proposed models on the LDC2017T10 and LDC2020T02 corpora.…”
Section: Methods
confidence: 99%
“…Fine-tuning for Graph-to-text Generation. While previous approaches (Song et al., 2018; Ribeiro et al., 2019; Cai and Lam, 2020; Schmitt et al., 2021; Zhang et al., 2020b) have shown that explicitly encoding the graph structure is beneficial, fine-tuning PLMs on linearized structured data has established a new level of performance in data-to-text generation (Nan et al., 2021; Kale, 2020). Our work can be seen as integrating the advantage of both graph structure encoding and PLMs, using a novel adapter module.…”
Section: Related Work
confidence: 98%
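The "linearized structured data" mentioned in the statement above refers to flattening a graph into a token sequence before feeding it to a PLM such as T5. A minimal, illustrative sketch of such a linearization (not the cited papers' exact scheme; the graph encoding and helper names are assumptions):

```python
def linearize_amr(node, graph):
    """Depth-first linearization of an AMR-like graph into a
    PENMAN-style token sequence. Reentrant nodes (here 'b') are
    simply re-expanded, which is one way linearization loses
    explicit graph structure."""
    tokens = ["(", graph[node]["concept"]]
    for role, child in graph[node].get("edges", []):
        tokens.append(role)
        tokens.extend(linearize_amr(child, graph))
    tokens.append(")")
    return tokens

# Toy AMR for "The boy wants to go" (illustrative encoding):
amr = {
    "w": {"concept": "want-01", "edges": [(":ARG0", "b"), (":ARG1", "g")]},
    "b": {"concept": "boy"},
    "g": {"concept": "go-02", "edges": [(":ARG0", "b")]},
}
seq = " ".join(linearize_amr("w", amr))
# seq == "( want-01 :ARG0 ( boy ) :ARG1 ( go-02 :ARG0 ( boy ) ) )"
```

The duplicated "( boy )" shows what a sequence-only PLM must recover implicitly, and why adapter modules that consume the graph structure directly can complement linearized fine-tuning.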