2021
DOI: 10.1038/s41598-021-85826-x
Discovering latent node information by graph attention network

Abstract: In this paper, we propose graph attention based network representation (GANR), which utilizes the graph attention architecture and takes graph structure as the supervised learning information. Compared with node-classification-based representations, GANR can be used to learn a representation for any given graph. GANR not only learns high-quality node representations that achieve competitive performance on link prediction, network visualization and node classification, but can also extract mean…

Cited by 10 publications (13 citation statements)
References 28 publications
“…In node-level modelling, for all six models, the RGCN model achieved the best performance for multilabel classification prediction for both T and P, with accuracy rates of 0.8312 ± 0.0220 and 0.8356 ± 0.0404, respectively. This suggests that the graph neural network model has an advantage in node-level prediction [75]. We also performed parameter optimization experiments on the RotatE model when performing the M-T link prediction task, and the best performance was achieved when setting the epoch, embedding dimension, and learning rate to 50, 64, and 0.001, respectively.…”
Section: Discussion
confidence: 99%
“…arrival in Google Maps 25 to discovering latent node information 26 . The increasing activity in the field of GNNs has resulted in dozens of architectures being proposed in the past few years.…”
Section: Methods
confidence: 99%
“…We used two classical GNNs as the basic graph neural network models, namely GraphSAGE [36] and GAT [37]. Equations 10 and 11 are the core formulations of the GraphSAGE model.…”
Section: Graph Neural Network Model
confidence: 99%
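The GraphSAGE formulation this statement refers to (its Equations 10 and 11 are not reproduced here) boils down to aggregating neighbour features, concatenating them with the node's own features, and applying a shared transform. A minimal NumPy sketch of that mean-aggregator step, with illustrative function and parameter names (not taken from the cited paper):

```python
import numpy as np

def sage_mean_layer(h, adj, W):
    """One GraphSAGE layer with a mean aggregator (illustrative sketch).

    h   : (N, d_in)       node feature matrix
    adj : (N, N)          binary adjacency matrix
    W   : (2*d_in, d_out) weights applied to [h_v || mean(h_u, u in N(v))]
    """
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)   # guard isolated nodes
    h_neigh = adj @ h / deg                            # mean of neighbour features
    h_cat = np.concatenate([h, h_neigh], axis=1)       # concat self + neighbourhood
    out = np.tanh(h_cat @ W)                           # nonlinearity (choice varies)
    # L2-normalise each embedding, as in the original GraphSAGE
    return out / np.linalg.norm(out, axis=1, keepdims=True).clip(min=1e-12)

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 3))
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
W = rng.normal(size=(6, 5))
emb = sage_mean_layer(h, adj, W)   # (4, 5) unit-norm node embeddings
```

The activation and normalisation shown are assumptions following the original GraphSAGE paper; the citing work may use different choices.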
“…The GAT model follows a similar algorithmic process to the GraphSAGE model, requiring a neighbour-aggregation and a node-update step. The difference is that weights are introduced in the neighbour-aggregation process, so the neighbours' information is weighted [37]. Equations 12, 13 and 14 are the core formulas of the GAT model.…”
confidence: 99%
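The weighted aggregation the statement describes is GAT's attention mechanism: a score for each edge, softmax-normalised over each node's neighbourhood, then a weighted sum of transformed neighbour features. A single-head NumPy sketch of that structure (names and dimensions are illustrative; the cited paper's Equations 12-14 are not reproduced here):

```python
import numpy as np

def gat_layer(h, adj, W, a, negative_slope=0.2):
    """Single-head GAT layer (illustrative sketch).

    h   : (N, d_in)     node features
    adj : (N, N)        adjacency matrix WITH self-loops included
    W   : (d_in, d_out) shared linear transform
    a   : (2*d_out,)    attention vector applied to [W h_i || W h_j]
    """
    z = h @ W                                     # (N, d_out)
    d = z.shape[1]
    # e_ij = LeakyReLU(a^T [z_i || z_j]), computed for all pairs at once
    e = (z @ a[:d])[:, None] + (z @ a[d:])[None, :]
    e = np.where(e > 0, e, negative_slope * e)    # LeakyReLU
    e = np.where(adj > 0, e, -np.inf)             # mask non-edges
    e = e - e.max(axis=1, keepdims=True)          # stable softmax
    att = np.exp(e)
    att = att / att.sum(axis=1, keepdims=True)    # attention weights per row
    return np.tanh(att @ z)                       # weighted neighbour aggregation

rng = np.random.default_rng(1)
h = rng.normal(size=(5, 4))
adj = ((np.eye(5) + (rng.random((5, 5)) > 0.5)) > 0).astype(float)
W = rng.normal(size=(4, 3))
a = rng.normal(size=(6,))
out = gat_layer(h, adj, W, a)   # (5, 3) attention-weighted embeddings
```

Self-loops in `adj` are assumed so every softmax row has at least one finite entry; multi-head attention and the final activation vary by implementation.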