2020
DOI: 10.1609/aaai.v34i05.6243

Graph Transformer for Graph-to-Sequence Learning

Abstract: The dominant graph-to-sequence transduction models employ graph neural networks for graph representation learning, where the structural information is reflected by the receptive field of neurons. Unlike graph neural networks, which restrict information exchange to the immediate neighborhood, we propose a new model, known as Graph Transformer, that uses explicit relation encoding and allows direct communication between two distant nodes. It provides a more efficient way for global graph structure modeling. …
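For intuition, the sketch below shows single-head attention in which every node pair communicates directly and a pairwise relation embedding biases the attention score. It is a minimal illustration of the idea described in the abstract, not the paper's exact formulation; the function name, the shape of the relation tensor `rel`, and the additive score bias are assumptions.

```python
import torch
import torch.nn.functional as F

def relation_aware_attention(x, rel, Wq, Wk, Wv, Wr):
    """Single-head attention where every node pair (i, j) attends directly,
    with a pairwise relation embedding biasing the score.

    x:   (n, d)    node representations
    rel: (n, n, d) pairwise relation features (e.g. encoded path labels) -- assumed shape
    Wq, Wk, Wv, Wr: (d, d) projection matrices
    """
    q = x @ Wq                      # (n, d)
    k = x @ Wk                      # (n, d)
    v = x @ Wv                      # (n, d)
    r = rel @ Wr                    # (n, n, d)

    d = q.size(-1)
    # content-content score plus content-relation bias for each node pair
    scores = (q @ k.t() + (q.unsqueeze(1) * r).sum(-1)) / d ** 0.5
    attn = F.softmax(scores, dim=-1)   # every node attends to every other node
    return attn @ v
```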

Cited by 170 publications (164 citation statements)
References 20 publications
“…configurations of our model are superior to the baselines by a significant margin. Noticeably, DCGCN and graph transformer are strong baselines, delivering SOTA performance across tasks such as AMR-to-text generation and syntax-based neural machine translation (Guo et al., 2019; Cai and Lam, 2019). We believe the larger number of edge types in our task impairs their capability.…”
Section: Methods (mentioning)
confidence: 98%
“…In the "Modified GraphRNN" baseline (iii), we use the breadth-first-search (BFS) based node ordering to flatten the graph 4 , and use RNNs as the encoders (You et al, 2018) and a decoder similar to our systems. In the final two baselines, "Graph Transformer" (iv) and "Deep Convolutional Graph Networks" (DCGCN) (v), we use the Graph Transformers (Cai and Lam, 2019) and Deep Convolutional Graph Networks (Guo et al, 2019) to encode the source graph (the decoder is identical to ours).…”
Section: Methodsmentioning
confidence: 99%
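As a side note on the "Modified GraphRNN" baseline quoted above, BFS-based node ordering simply linearizes the graph into a node sequence before a sequential (RNN) encoder consumes it. The sketch below is a generic illustration of such a flattening; the adjacency-dict representation and function name are assumptions, not taken from You et al., 2018.

```python
from collections import deque

def bfs_flatten(adj, start=0):
    """Flatten a graph into a node sequence via breadth-first search,
    e.g. to feed the source graph to a sequential encoder.

    adj: dict mapping node id -> iterable of neighbour ids (assumed format)
    Returns the BFS visit order as a list of node ids.
    """
    order, seen, queue = [], {start}, deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return order

# e.g. bfs_flatten({0: [1, 2], 1: [3], 2: [], 3: []}) -> [0, 1, 2, 3]
```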
“…We take a graph Transformer model (Koncel-Kedziorski et al., 2019; Zhu et al., 2019; Cai and Lam, 2020; Wang et al., 2020) as our baseline. Previous work has proposed several variations of graph Transformer.…”
Section: Baseline: Graph Transformer (mentioning)
confidence: 99%
“…In particular, graph neural networks (Beck et al., 2018; Song et al., 2018; Guo et al., 2019) and richer graph representations (Damonte and Cohen, 2019; Hajdik et al., 2019; Ribeiro et al., 2019) have been shown to give better performances than RNN-based models (Konstas et al., 2017) on linearized graphs. Subsequent work exploited graph Transformer (Zhu et al., 2019; Cai and Lam, 2020; Wang et al., 2020), achieving better performances by directly modeling the intercorrelations between distant node pairs with relation-aware global communication. Despite the progress on the encoder side, the current state-of-the-art models use a rather standard decoder: it functions as a language model, where each word is generated given only the previous words.…”
Section: Introduction (mentioning)
confidence: 99%
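The "rather standard decoder" described in this last excerpt is plain conditional left-to-right generation: each token is predicted from the previously generated tokens and the encoded graph. A minimal greedy-decoding sketch, assuming a hypothetical decoder(prefix, memory) callable that returns next-token logits (this interface is an assumption for illustration, not any cited system's API):

```python
import torch

def greedy_decode(decoder, memory, bos_id, eos_id, max_len=100):
    """Left-to-right decoding: each token is chosen from the previously
    generated prefix and the encoded graph `memory` only.

    decoder: assumed callable mapping (token_prefix, memory) -> next-token logits
    """
    tokens = [bos_id]
    for _ in range(max_len):
        prefix = torch.tensor(tokens).unsqueeze(0)   # (1, t)
        logits = decoder(prefix, memory)             # (1, vocab_size)
        next_id = int(logits.argmax(dim=-1))         # greedy choice
        tokens.append(next_id)
        if next_id == eos_id:
            break
    return tokens
```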