2019 | DOI: 10.1162/tacl_a_00269

Densely Connected Graph Convolutional Networks for Graph-to-Sequence Learning

Abstract: We focus on graph-to-sequence learning, which can be framed as transducing graph structures to sequences for text generation. To capture structural information associated with graphs, we investigate the problem of encoding graphs using graph convolutional networks (GCNs). Unlike various existing approaches where shallow architectures were used for capturing local structural information only, we introduce a dense connection strategy, proposing novel Densely Connected Graph Convolutional Networks (DCGCNs). Such…
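The dense connection strategy described in the abstract can be sketched concretely. The following is a minimal sketch in PyTorch, assuming a simple dense adjacency-matrix formulation; the class name DenseGCNBlock, the growth_dim parameter, and the plain adj @ h aggregation are illustrative assumptions rather than the authors' released implementation. What it demonstrates is the core pattern: each sub-layer consumes the concatenation of the block input and all preceding sub-layer outputs, letting deeper layers combine local and more global structural features.

```python
import torch
import torch.nn as nn

class DenseGCNBlock(nn.Module):
    """One densely connected GCN block (illustrative sketch).

    Sub-layer l receives the concatenation of the block input and the
    outputs of all earlier sub-layers, so its input width grows by
    growth_dim per layer -- the dense connectivity pattern named in
    the abstract.
    """

    def __init__(self, input_dim: int, num_layers: int, growth_dim: int):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(input_dim + l * growth_dim, growth_dim)
            for l in range(num_layers)
        )

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_nodes, input_dim) node features
        # adj: (num_nodes, num_nodes) normalized adjacency with self-loops
        outputs = [x]
        for layer in self.layers:
            h = torch.cat(outputs, dim=-1)   # block input + all prior outputs
            h = torch.relu(layer(adj @ h))   # one graph-convolution step
            outputs.append(h)
        # the block emits everything it produced, concatenated
        return torch.cat(outputs, dim=-1)

# Tiny usage check on a 5-node graph with a trivial adjacency matrix:
x = torch.randn(5, 16)
adj = torch.eye(5)
block = DenseGCNBlock(input_dim=16, num_layers=3, growth_dim=8)
print(block(x, adj).shape)  # torch.Size([5, 40]) = 16 + 3 * 8
```

In the full graph-to-sequence model, blocks like this would sit on the encoder side; the sketch isolates only the dense connectivity pattern.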

Cited by 121 publications (129 citation statements)
References: 29 publications
“…As a baseline (S2S), we train an attention-based encoder-decoder model with copy and coverage mechanisms, and use a linearized version of the graph generated by depth-first traversal order as input. We compare our models against several state-of-the-art results reported on the two datasets (Konstas et al., 2017; Song et al., 2018; Beck et al., 2018; Damonte and Cohen, 2019; Cao and Clark, 2019; Guo et al., 2019).…”
Section: Results (mentioning)
confidence: 99%
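The quoted baseline feeds the sequence model a depth-first linearization of the input graph. Below is a minimal sketch of such a linearization, assuming a toy adjacency-list encoding and a bracketed output format; the linearize function, the node/edge representation, and the reentrancy handling are illustrative assumptions, not the authors' actual preprocessing.

```python
def linearize(graph: dict, root: str) -> str:
    """Linearize a labeled graph into a token sequence via depth-first
    traversal, emitting a plain reference for revisited (reentrant)
    nodes instead of recursing into them again."""
    tokens, visited = [], set()

    def dfs(node: str) -> None:
        label, children = graph[node]
        if node in visited:          # reentrancy: reference, don't recurse
            tokens.append(label)
            return
        visited.add(node)
        tokens.append("(")
        tokens.append(label)
        for edge, child in children:
            tokens.append(edge)
            dfs(child)
        tokens.append(")")

    dfs(root)
    return " ".join(tokens)

# Example: a tiny AMR-like graph for "the boy wants to go",
# where the node "b" is reentrant (agent of both events).
g = {
    "w": ("want-01", [(":ARG0", "b"), (":ARG1", "g")]),
    "b": ("boy", []),
    "g": ("go-01", [(":ARG0", "b")]),
}
print(linearize(g, "w"))
# ( want-01 :ARG0 ( boy ) :ARG1 ( go-01 :ARG0 boy ) )
```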
“…Damonte and Cohen (2019) show that off-the-shelf GCNs cannot achieve good performance for AMR-to-text generation. To tackle this issue, Guo et al. (2019) introduce dense connectivity to GNNs in order to integrate both local and global features, achieving good results on the task. Our work is related to Damonte and Cohen (2019), who use stacking of GCN and LSTM layers to improve the model capacity and employ anonymization.…”
Section: Related Work (mentioning)
confidence: 99%
“…Table 1: Main test results on LDC2015E86. Numbers such as "2M" indicate the amount of extra silver data used, and "ensemble" indicates model ensembling.

Model                           BLEU    Time
LSTM                            22.00   -
GRN (Song et al., 2018)         23.28   -
DCGCN (Guo et al., 2019)        25.70   -
RA-Trans-SA (Zhu et al., 2019)  …
…”
Section: Model / BLEU / Time (mentioning)
confidence: 99%
“…Data-to-text is the task of expressing the components (attributes and values) of a meaning representation (MR) as human-readable natural sentences. Previous work in this area includes templates (Reiter, 1995), rules (Reiter et al., 2005), pipelines (Reiter, 2007; Reiter and Dale, 1997), probabilistic models (Liang et al., 2009), and more recently end-to-end as well as neural-based methods (Wen et al., 2015; Mei et al., 2016; Dušek and Jurcicek, 2016; Lampouras and Vlachos, 2016; Dušek et al., 2020; Wiseman et al., 2017; Gong, 2018; Chen and Mooney, 2008; Reiter, 2017; Lebret et al., 2016; Distiawan et al., 2018; Gehrmann et al., 2018; Marcheggiani and Perez-Beltrachini, 2018; Guo et al., 2019b; Zhao et al., 2020). In our work, we use the state-of-the-art model from Zhao et al. (2020) as our baseline.…”
Section: Data-to-Text Generation (mentioning)
confidence: 99%