Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2021
DOI: 10.1145/3459637.3482477
Semi-Supervised and Self-Supervised Classification with Multi-View Graph Neural Networks

Cited by 11 publications (4 citation statements). References 37 publications.
“…Multiple datasets demonstrate that the proposed model is significantly superior to conventional graph neural network models. Building on the remarkable success of graph neural networks in processing graph-structured data, Yuan et al. [7] designed a GNN with strong representational capability for the node classification problem. However, deep models suffer from overfitting, so a novel concept of aggregating more useful information from multiple views, without a deep structure, is proposed; extensive node classification experiments on six public datasets demonstrate the superiority of the proposed model over the most recent methods.…”
Section: Related Work
confidence: 99%
“…The attention score is then normalized to yield the final attention coefficient a_ij, as indicated by Equation (7). Finally, the features are weighted and summed according to the attention coefficients.…”
Section: Graph Attention Layer
confidence: 99%
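The excerpt above describes the standard graph attention mechanism: raw pairwise scores are computed from transformed node features, softmax-normalized over each node's neighborhood to give the coefficients a_ij, and used to take a weighted sum of neighbor features. A minimal NumPy sketch of that idea (all shapes and names here are illustrative assumptions, not the cited paper's code):

```python
import numpy as np

def gat_layer(H, A, W, a):
    """One graph attention layer: normalize attention scores over each
    node's neighbors, then take the attention-weighted sum of features.

    H: (n, f) node features; A: (n, n) adjacency with self-loops;
    W: (f, f') weight matrix; a: (2*f',) attention vector.
    """
    Z = H @ W                                   # transformed features, (n, f')
    n = Z.shape[0]
    # raw attention logits e_ij = LeakyReLU(a^T [z_i || z_j])
    e = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            s = a @ np.concatenate([Z[i], Z[j]])
            e[i, j] = s if s > 0 else 0.2 * s   # LeakyReLU, slope 0.2
    # mask non-neighbors, then softmax-normalize each row -> a_ij
    e = np.where(A > 0, e, -np.inf)
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    # weighted sum of neighbor features
    return alpha @ Z
```

The softmax over the masked row is exactly the normalization step the excerpt refers to; only neighbors of node i (including i itself, via the self-loop) receive nonzero weight.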
“…For example, Wang et al. (2020) proposed AM-GCN, which constructed two views from the network structure and node features, respectively, and then used an attention mechanism to automatically learn the weights of the views. Yuan et al. (2021) designed three complementary views and adopted convolutional operations to learn view embeddings. Finally, an attention mechanism was also used to fuse node representations for the classification task.…”
Section: Introduction
confidence: 99%
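The view fusion described in this excerpt — learning per-view attention weights and combining view embeddings into one node representation — can be sketched as follows. This is a hedged NumPy illustration under assumed shapes; the query vector `Wq` and the tanh scoring are generic choices, not the authors' implementation:

```python
import numpy as np

def fuse_views(view_embs, Wq):
    """Fuse per-view node embeddings with learned attention weights.

    view_embs: list of (n, d) embeddings, one per view.
    Wq: (d,) attention query vector scoring each view for each node.
    Returns the (n, d) fused node embeddings.
    """
    Z = np.stack(view_embs, axis=1)        # (n, num_views, d)
    scores = np.tanh(Z) @ Wq               # (n, num_views) per-node view scores
    scores = scores - scores.max(axis=1, keepdims=True)
    w = np.exp(scores)
    w = w / w.sum(axis=1, keepdims=True)   # softmax over views
    return (w[..., None] * Z).sum(axis=1)  # attention-weighted sum of views
```

Because the weights are a softmax, each node's fused embedding is a convex combination of its view embeddings, so views that score higher for a given node dominate its final representation.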
“…Although the above methods are effective, there is room for improvement. For instance, AM-GCN constructed only two views, and Yuan et al. (2021) used convolution operations to simply aggregate node features and topological information, which limits the model's ability to capture important information. Therefore, we constructed three views to learn comprehensive drug interaction relationships.…”
Section: Introduction
confidence: 99%