2020
DOI: 10.1007/978-3-030-59416-9_28
Role-Oriented Graph Auto-encoder Guided by Structural Information

Cited by 15 publications (11 citation statements)
References 22 publications
“…For example, DRNE [125] develops a deep-learning method with a normalized long short-term memory (LSTM) layer that learns regular equivalence by recursively aggregating each node's neighbors' representations. GAS [141] uses graph neural networks (GNNs) but applies sum-pooling propagation instead of graph convolutional network (GCN) [142] propagation to better capture local node structure. For training, GAS extracts features similarly to ReFeX and aggregates them only once.…”
Section: Role Discovery Models
confidence: 99%
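The distinction drawn above between sum-pooling propagation and GCN propagation can be illustrated numerically. The sketch below (an illustrative assumption, not code from the cited papers) contrasts GCN's symmetrically normalized aggregation with unnormalized sum-pooling on a toy graph; sum-pooling preserves degree information, which is one reason it can better distinguish local node structures:

```python
import numpy as np

# Toy 4-node undirected graph as an adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)  # one-hot node features

# GCN-style propagation: D^{-1/2} (A + I) D^{-1/2} X
# (normalized, mean-like aggregation over neighbors plus self-loop)
A_hat = A + np.eye(4)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
gcn_out = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X

# Sum-pooling propagation: A X
# (unnormalized sum over neighbors; node degree stays recoverable)
sum_out = A @ X

# Row sums of the sum-pooled output equal the node degrees,
# whereas GCN's normalization washes this structural signal out.
print(sum_out.sum(axis=1))  # → [2. 2. 3. 1.]
```

Nodes with different degrees thus receive different embedding magnitudes under sum-pooling, a property exploited by structure-oriented models.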
“…GAS [68]. Graph neural networks have the power to capture structure, as they are closely related to the Weisfeiler-Lehman (WL) test in some ways [81].…”
Section: Structural Information Reconstruction/Guidance, DRNE
confidence: 99%
“…struc2vec [39], struc2gauss [64], Role2Vec [65], and NODE2BITS [67] are all based on random walks. DRNE [50], GraLSP [70], GAS [68], and RESD [69] are deep-learning methods. In the subsequent experiments, all parameters are fine-tuned.…”
Section: Experimental Settings
confidence: 99%