2022
DOI: 10.3390/math10081262
Inferring from References with Differences for Semi-Supervised Node Classification on Graphs

Abstract: Following the application of Deep Learning to graphic data, Graph Neural Networks (GNNs) have become the dominant method for Node Classification on graphs in recent years. To assign nodes with preset labels, most GNNs inherit the end-to-end way of Deep Learning in which node features are input to models while labels of pre-classified nodes are used for supervised learning. However, while these methods can make full use of node features and their associations, they treat labels separately and ignore the structu…

Cited by 7 publications (7 citation statements)
References 20 publications
“…LinkDist series (LinkDistMLP, CoLinkDist, and LinkDist) extract useful features by distilling self‐knowledge from associated couple nodes [46]. 3ference analyzes the transition patterns of node labels on the graph [47]. The performances of these SOTA approaches are improved.…”
Section: Methods
confidence: 99%
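The 3ference idea quoted above — inferring a node's label from the transition patterns of labels along edges — can be sketched roughly as follows. This is a hypothetical minimal example for illustration, not the authors' implementation: the function names and the toy graph are invented.

```python
import numpy as np

def label_transition_matrix(edges, labels, num_classes):
    """Count how often a node of class i is linked to a node of class j,
    then row-normalize to get empirical transition probabilities.
    Unlabeled nodes are marked with -1 and skipped."""
    T = np.zeros((num_classes, num_classes))
    for u, v in edges:
        if labels[u] >= 0 and labels[v] >= 0:
            T[labels[u], labels[v]] += 1
            T[labels[v], labels[u]] += 1  # treat the graph as undirected
    row_sums = T.sum(axis=1, keepdims=True)
    return np.divide(T, row_sums, out=np.zeros_like(T), where=row_sums > 0)

def infer_from_neighbors(node, edges, labels, T):
    """Score each class for `node` by summing transition probabilities
    from the classes of its labeled neighbors; pick the best class."""
    scores = np.zeros(T.shape[1])
    for u, v in edges:
        if u == node and labels[v] >= 0:
            scores += T[labels[v]]
        elif v == node and labels[u] >= 0:
            scores += T[labels[u]]
    return int(scores.argmax())

# toy graph: nodes 0-4, node 4 unlabeled (-1)
edges = [(0, 1), (1, 2), (2, 3), (2, 4), (3, 4)]
labels = [0, 0, 1, 1, -1]
T = label_transition_matrix(edges, labels, num_classes=2)
print(infer_from_neighbors(4, edges, labels, T))  # → 1
```

Here both labeled neighbors of node 4 belong to class 1, and the learned transition matrix is homophilous, so class 1 wins; the actual 3ference model combines such label-transition information with node features.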
“…[46] 3ference analyzes the transition patterns of node labels on the graph. [47] The performances of these SOTA approaches are improved. However, our proposed GD achieved four out of five top on all graph citation and coauthor data sets.…”
Section: Comparison With More SOTA Approaches
confidence: 99%
“…For Amazon and Coauthor datasets, seven baselines are used as in Table 3. For the MLP with 3-layers, GCN and 3ference, results are obtained from (Luo et al 2022), and a result for DSF comes from (Guo et al 2023). For others, the experiments were performed by randomly splitting the data as 60%/20%/20% for training/validation/testing datasets as in (Luo et al 2022) and replicating it 10 times to obtain mean and standard deviation of the evaluation metric.…”
Section: Semi-supervised Node Classification
confidence: 99%
“…For additional datasets in Table 3, the performance of LSAP outperformed the baselines. The results for MLP, GCN and 3ference were adopted from (Luo et al 2022), which reported the best performance out of 10 replicated experiments. We ran the same experiments for GAT, GDC, GraphHeat, Exact and LSAP, and the mean and standard deviation of metrics are given.…”
Section: Semi-supervised Node Classification
confidence: 99%
“…Local Representation Distillation It has been demonstrated that the nodes in the graph data have a high probability of belonging to the same class as their adjacent nodes (Luo et al 2022). Therefore, it is intuitive to minimize the embedding distance (i.e., maximizing the embedding similarity) of two adjacent nodes in the student model.…”
Section: Pre-training
confidence: 99%
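The local representation distillation quoted above — minimizing the embedding distance of adjacent nodes under the homophily assumption — can be illustrated with a minimal sketch. This assumes dense NumPy embeddings and uses a plain mean squared L2 distance; the cited work trains a student model and maximizes embedding similarity, which this toy loss does not reproduce.

```python
import numpy as np

def local_distillation_loss(embeddings, edges):
    """Mean squared L2 distance between embeddings of adjacent nodes.
    Minimizing this pulls connected nodes together in embedding space,
    reflecting the assumption that neighbors likely share a class."""
    diffs = np.stack([embeddings[u] - embeddings[v] for u, v in edges])
    return float((diffs ** 2).sum(axis=1).mean())

# toy embeddings: nodes 0 and 1 are adjacent and close; node 2 is adjacent
# to node 0 but far away, so it dominates the loss
emb = np.array([[0.0, 0.0],
                [0.1, 0.0],
                [1.0, 1.0]])
edges = [(0, 1), (0, 2)]
print(local_distillation_loss(emb, edges))  # ≈ 1.005
```

In practice such a term is added to the supervised classification loss, so the optimizer trades off label fit against embedding smoothness over edges.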