2021
DOI: 10.1109/jas.2021.1004201

Variational Gridded Graph Convolution Network for Node Classification

Cited by 15 publications (6 citation statements).
References 36 publications.
“…In the above equation, H^(1) denotes the output of the first layer and the input to the second layer; H^(2) denotes the output of the second layer; ReLU denotes the activation function; X denotes the input to the first-layer neural network; W denotes the weights of the graph neural network, with W^(1) the weights of the first layer and W^(2) the weights of the second layer. Â denotes the single-layer convolution operation and can be interpreted as a convolution kernel, as shown in Equation (8).…”
Section: Graph Convolution Layer (mentioning)
confidence: 99%
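The excerpt describes the standard two-layer GCN forward pass, H^(1) = ReLU(Â X W^(1)) and H^(2) = Â H^(1) W^(2), with Â the renormalized adjacency acting as a fixed convolution kernel. Below is a minimal numpy sketch of that pass; the function names are illustrative assumptions, not the cited paper's code.

import numpy as np

def normalized_adjacency(A):
    # A_hat = D~^(-1/2) (A + I) D~^(-1/2): the renormalized adjacency
    # that each GCN layer applies as a fixed convolution kernel.
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_forward(A, X, W1, W2):
    A_hat = normalized_adjacency(A)
    H1 = np.maximum(A_hat @ X @ W1, 0.0)  # H^(1) = ReLU(A_hat X W^(1))
    H2 = A_hat @ H1 @ W2                  # H^(2) = A_hat H^(1) W^(2)
    return H1, H2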
“…where H^(2) is the input to the output layer and Z is the classification result of the model. Labelling the samples allows the graph convolution model to evaluate the corresponding cross-entropy loss function, so the semi-supervised node classification task can be trained as a whole.…”
Section: Output Layer (mentioning)
confidence: 99%
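The output layer referred to here is a row-wise softmax, Z = softmax(H^(2)), with cross-entropy evaluated only on the labelled nodes. A minimal numpy sketch under those assumptions (the mask and names are hypothetical):

import numpy as np

def softmax(H):
    e = np.exp(H - H.max(axis=1, keepdims=True))  # stabilized row-wise softmax
    return e / e.sum(axis=1, keepdims=True)

def semi_supervised_loss(H2, y, labelled):
    # Z = softmax(H^(2)); cross-entropy is taken over the labelled
    # nodes only, which is the semi-supervised setting described above.
    Z = softmax(H2)
    return -np.mean(np.log(Z[labelled, y[labelled]] + 1e-12))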
“…By contrast, the skip-gram method predicts the surrounding words from the central word; its time complexity is O(KV), so its training time is longer than CBOW's. CBOW training aims to maximize the probability of the central word occurring given its context [11]. After CBOW training, the word-vector matrix can be obtained and used as the input to deep learning.…”
Section: Word2vec Model (mentioning)
confidence: 99%
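As a concrete illustration of the CBOW training described in the excerpt, here is a minimal sketch using gensim (the library choice and toy corpus are assumptions, not part of the cited work); sg=0 selects CBOW, and the learned word-vector matrix can then feed a downstream deep-learning model.

from gensim.models import Word2Vec

# Toy corpus; in practice this would be the tokenized training text.
sentences = [["graph", "convolution", "network"],
             ["word", "vector", "matrix"]]

# sg=0 selects CBOW: the context words predict the central word.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)

# The learned word-vector matrix, usable as input to deep learning.
embedding_matrix = model.wv.vectors
print(embedding_matrix.shape)  # (vocabulary_size, 50)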