Advances in Neural Information Processing Systems 19 (2007)
DOI: 10.7551/mitpress/7503.003.0195
A Scalable Machine Learning Approach to Go

Cited by 26 publications (49 citation statements); references 7 publications.
“…However, these strategies sacrifice the expressivity needed to capture all-pair interactions among arbitrary nodes. Another recent work [46] proposes kernelized message passing that approximates the all-pair attention with linear complexity. However, this scheme introduces random feature maps that can lead to training instability.…”
Section: Preliminary and Related Work (mentioning)
confidence: 99%
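As context for the quoted critique: kernelized message passing of the sort attributed to [46] replaces exact softmax(QKᵀ)V with a random-feature factorization, so all-pair attention over n nodes costs O(n·m·d) for m random features instead of O(n²·d). The sketch below is a generic Performer-style construction in NumPy, not code from any of the cited papers; the function names and the choice of positive random features are assumptions for illustration.

```python
import numpy as np

def random_feature_map(x, W):
    # Positive random features: phi(x) = exp(Wx - |x|^2 / 2) / sqrt(m),
    # chosen so that E[phi(q) . phi(k)] = exp(q . k), the softmax kernel.
    m = W.shape[0]
    proj = x @ W.T                                    # (n, m)
    return np.exp(proj - 0.5 * np.sum(x * x, axis=-1, keepdims=True)) / np.sqrt(m)

def kernelized_attention(Q, K, V, m=64, seed=0):
    # Approximate softmax attention in linear time: aggregate keys/values
    # once into an (m, d_v) summary, then read out per query.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((m, Q.shape[-1]))         # shared random projection
    Qp, Kp = random_feature_map(Q, W), random_feature_map(K, W)
    KV = Kp.T @ V                                     # (m, d_v) key-value summary
    z = Kp.sum(axis=0)                                # (m,) normalizer summary
    return (Qp @ KV) / (Qp @ z)[:, None]              # (n, d_v)

# All-pair "attention" over 1000 nodes at cost linear in n.
n, d = 1000, 16
Q = np.random.randn(n, d) / d**0.25                   # fold in the 1/sqrt(d) scaling
K = np.random.randn(n, d) / d**0.25
V = np.random.randn(n, d)
out = kernelized_attention(Q, K, V)
```

Because the projection W is drawn randomly, the exponential feature map has high variance for some inputs, which is one concrete way the training instability mentioned in the quote can arise.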
“…Incorporation of Structural Information. For accommodating the prior information of the input graph G, existing models tend to use positional encodings [32], edge regularization loss [46] or augmenting the Transformer layers with GNNs [47]. Here we resort to a simple-yet-effective scheme that combines Z with the propagated embeddings by GNNs at the output layer:…”
Section: Simplifying and Empowering Transformers on Large Graphs (mentioning)
confidence: 99%
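The formula elided after the colon is not recoverable from this excerpt, but the described design (merging the Transformer output Z with GNN-propagated embeddings only at the output layer) admits a minimal sketch. Everything here — normalized_adjacency, the SGC-style parameter-free propagation, and the mixing weight alpha — is an illustrative assumption, not the cited paper's actual scheme.

```python
import numpy as np

def normalized_adjacency(A):
    # Symmetric GCN-style normalization: D^{-1/2} (A + I) D^{-1/2}.
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gnn_propagate(X, A_hat, hops=2):
    # Parameter-free propagation: repeated smoothing over A_hat injects
    # local structural information into the node features.
    for _ in range(hops):
        X = A_hat @ X
    return X

def combine_at_output(Z, X, A_hat, alpha=0.5):
    # Mix the Transformer's global embeddings Z with locally propagated
    # features at the output layer only, leaving the attention stack untouched.
    return alpha * Z + (1 - alpha) * gnn_propagate(X, A_hat)
```

Deferring the structural signal to the output layer keeps the attention layers graph-agnostic, which matches the "simple-yet-effective" framing in the quoted passage.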