2022
DOI: 10.21203/rs.3.rs-1835327/v1
Preprint

SSAR-GNN: Self-Supervised Artist Recommendation with Graph Neural Networks

Abstract: Artist recommendation plays a vital role in the artist domain. Accurate recommendation can help avoid ineffective searches and provide comprehensive knowledge of the relationships among artists. However, existing studies mainly focus on the artists themselves or on artistic works; they cannot effectively explore the relationships among artists. In this paper, we study the problem of artist recommendation for the first time. We propose an artist dataset to analyze the similarity relationship from…

Cited by 6 publications (15 citation statements) · References 14 publications
“…We follow the public data splits as [14,42]. We compare GraphMAE2 with state-of-the-art self-supervised graph learning methods, including contrastive methods, DGI [42], MVGRL [14], GRACE [57], BGRL [39], InfoGCL [48], CCA-SSG [54], and GGD [55] as well as generative methods GAE [24], Graph-MAE [18]. For the evaluation, we employ the linear probing mentioned above and report the average performance of accuracy on the test nodes based on 20 random initialization.…”
Section: Evaluating on Small-scale Datasets (citation type: mentioning; confidence: 99%)
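The linear-probing protocol quoted above is straightforward to reproduce. Below is a minimal sketch assuming a pretrained, frozen encoder has already produced node embeddings `z`, labels `y`, and boolean `train_mask` / `test_mask` from the public split; these tensor names, the optimizer settings, and the training length are illustrative placeholders, not identifiers or hyperparameters from the cited papers.

```python
# Minimal sketch of linear probing on frozen node embeddings: a freshly
# initialized linear classifier is trained per run, and test accuracy is
# averaged over several random initializations.
import torch
import torch.nn.functional as F

def linear_probe(z, y, train_mask, test_mask, n_runs=20, epochs=300, lr=0.01):
    z = z.detach()                                   # encoder stays frozen
    num_classes = int(y.max()) + 1
    accs = []
    for seed in range(n_runs):
        torch.manual_seed(seed)                      # new random init per run
        clf = torch.nn.Linear(z.size(1), num_classes)
        opt = torch.optim.Adam(clf.parameters(), lr=lr, weight_decay=1e-4)
        for _ in range(epochs):
            opt.zero_grad()
            loss = F.cross_entropy(clf(z[train_mask]), y[train_mask])
            loss.backward()
            opt.step()
        with torch.no_grad():
            pred = clf(z[test_mask]).argmax(dim=-1)
            accs.append((pred == y[test_mask]).float().mean().item())
    accs = torch.tensor(accs)
    # Report mean (and spread) of test accuracy across the random runs.
    return accs.mean().item(), accs.std().item()
```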
“…BGRL [39] uses an online encoder and a target encoder to contrast two augmented versions without negative samples. CCA-SSG [54] leverages a feature-level objective for graph SSL, inspired by Canonical Correlation Analysis methods.…”
Section: Graph Self-supervised Learning (citation type: mentioning; confidence: 99%)
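To make the feature-level idea behind CCA-SSG concrete, here is a rough sketch of a CCA-style objective on two augmented views: per-dimension standardization, an invariance term that aligns the two views, and a decorrelation term that pushes each view's feature covariance toward the identity. The function name, the exact normalization, and the `lambda_decorr` weight are assumptions for illustration rather than the paper's exact formulation.

```python
# Sketch of a feature-level (CCA-style) self-supervised loss over two
# augmented views of the same graph, in the spirit of CCA-SSG.
import torch

def cca_style_loss(z_a, z_b, lambda_decorr=1e-3):
    n, d = z_a.size()
    # Standardize each feature dimension (zero mean, unit variance).
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-8)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-8)

    # Invariance: match embeddings of the same node across the two views.
    invariance = ((z_a - z_b) ** 2).sum() / n

    # Decorrelation: keep each view's feature covariance close to identity.
    eye = torch.eye(d, device=z_a.device)
    cov_a = (z_a.T @ z_a) / n
    cov_b = (z_b.T @ z_b) / n
    decorrelation = ((cov_a - eye) ** 2).sum() + ((cov_b - eye) ** 2).sum()

    return invariance + lambda_decorr * decorrelation
```

Unlike BGRL's online/target encoder pair, an objective of this form needs neither negative samples nor a momentum-updated second encoder, which is what the quoted passage highlights.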
“…We consider both contrastive methods and generative methods as baselines. Node-level GCL baselines are compared in the node classification task, including DGI (Velickovic et al 2019), MVGRL (Hassani and Ahmadi 2020), GRACE, BGRL (Thakoor et al 2022), InfoGCL (Xu et al 2021), and CCA-SSG (Zhang et al 2021). In graph classification task, compared graph-level GCL baselines are Graph2vec (Narayanan et al 2017), InfoGraph (Sun et al 2020), GraphCL, JOAO (You et al 2021), GCC, MVGRL, and InfoGCL.…”
Section: Evaluation Setups (citation type: mentioning; confidence: 99%)