2021
DOI: 10.48550/arxiv.2111.04840
Preprint

Cold Brew: Distilling Graph Node Representations with Incomplete or Missing Neighborhoods

Abstract: Graph Neural Networks (GNNs) have achieved state-of-the-art performance in node classification, regression, and recommendation tasks. GNNs work well when rich, high-quality connectivity structure is available. However, this requirement is not satisfied in many real-world graphs, where node degrees follow power-law distributions and many nodes have few or noisy connections. In the extreme case, a node may have no neighbors at all, known as the Strict Cold Start (SCS) scenario. This forces…
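As a concrete illustration of the distillation idea described in the abstract, the sketch below regresses a teacher GNN's node embeddings onto a graph-free student MLP, so that isolated (SCS) nodes can be embedded from their raw features alone. This is a minimal sketch assuming PyTorch; the names StudentMLP and distill are illustrative and not taken from the paper's code.

```python
# Minimal sketch (not the authors' code): distill a teacher GNN's node
# embeddings into a graph-free student MLP, so Strict Cold Start nodes
# (no neighbors) can still be embedded from their features alone.
import torch
import torch.nn as nn

class StudentMLP(nn.Module):
    """Maps raw node features to the teacher's embedding space without edges."""
    def __init__(self, in_dim, hid_dim, emb_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, emb_dim),
        )

    def forward(self, x):
        return self.net(x)

def distill(student, features, teacher_embeddings, epochs=100, lr=1e-3):
    """Fit the student to reproduce precomputed teacher embeddings (MSE)."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(student(features), teacher_embeddings)
        loss.backward()
        opt.step()
    return student

# Usage: teacher_embeddings would come from a GNN trained on the observed graph;
# at inference time an isolated node only needs its own feature vector.
```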

Cited by 3 publications (14 citation statements: 0 supporting, 14 mentioning, 0 contrasting)
References 37 publications
“…GFKD [46], RDD [47], GKD [48], GLNN [49], Distill2Vec [50], MT-GCN [51], TinyGNN [52], GLocalKD [53], SCR [54], ROD [55], EGNN [56]; Middle layer: LWC-KD [57], MustaD [58], EGAD [59], AGNN [60], Cold Brew [61], PGD [62], OAD [63], CKD [64], BGNN [65], EGSC [66], HSKDM [67]; Constructed graph: GRL [68], GFL [69], HGKT [70], CPF [71], LSP [16], scGCN [72], MetaHG [73], G-CRD [74], HIRE [75]; SKD methods…”
Section: Output Layer (mentioning)
confidence: 99%
“…Output layer-based methods using KL divergence are GFKD [46], GLNN [49], Distill2Vec [50], and MT-GCN [51]; middle layer-based methods using KL divergence are MustaD [58], OAD [63], BGNN [65], and HSKDM [67]; constructed graph-based methods using KL divergence are CPF [71], LSP [16], MetaHG [73], and HIRE [75]. Similarly, output layer-based methods using MSE are RDD [47] and SCR [54]; middle layer-based methods using MSE are LWC-KD [57], AGNN [60], Cold Brew [61], and EGSC [66]. Finally, HIRE [75] uses constructed-graph knowledge of relational metric types to measure the differences between teacher and student models.…”
Section: Graph-based Knowledge Distillation For Graph Neural Network (mentioning)
confidence: 99%
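To make the distinction in the quoted statement concrete, here is a small illustrative sketch of the two distillation objectives it groups methods by: KL divergence on output logits versus MSE on intermediate representations. It assumes PyTorch; the function names output_layer_kd and middle_layer_kd are my own and not taken from any cited method.

```python
# Illustrative sketch (assumed PyTorch, not from any cited paper) of the two
# distillation losses discussed above.
import torch
import torch.nn.functional as F

def output_layer_kd(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student class distributions."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * t * t

def middle_layer_kd(student_hidden, teacher_hidden):
    """MSE between intermediate (hidden-layer) node representations."""
    return F.mse_loss(student_hidden, teacher_hidden)

# A combined training objective would typically weight these terms together
# with the ordinary supervised loss, e.g.:
#   loss = ce_loss + alpha * output_layer_kd(...) + beta * middle_layer_kd(...)
```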
“…Graphs are omnipresent, and graph convolutional networks (GCNs) [1] and their variants [2,3,4,5,6,7,8,9,10] are a powerful family of neural networks that can learn from graph-structured data. GCNs have been enormously successful in numerous real-world applications such as recommendation systems [11,12], social and academic networks [13,11,5,4,14], modeling proteins for drug discovery [15,16,12], computer vision [17,18], etc.…”
Section: Introduction (mentioning)
confidence: 99%
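For readers unfamiliar with the GCN formulation the quoted introduction refers to, below is a minimal sketch of a single GCN propagation layer, H' = sigma(D^{-1/2} (A + I) D^{-1/2} H W), assuming PyTorch and a dense adjacency matrix; the class name GCNLayer is illustrative.

```python
# Minimal sketch (assumed PyTorch) of one GCN propagation layer.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj):
        # adj: dense (N, N) adjacency matrix; add self-loops, then
        # symmetrically normalize before propagating features.
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        a_norm = deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)
        return torch.relu(a_norm @ self.weight(x))
```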