2019 · Preprint
DOI: 10.48550/arxiv.1904.01098

Unsupervised Inductive Graph-Level Representation Learning via Graph-Graph Proximity

Cited by 13 publications (14 citation statements) · References 33 publications

“…where a hyper-parameter controls the contributions of the two parts. Note that our solution differs from existing graph neural networks [3,26] in two aspects. First, our group representation learning network is hierarchical: it first learns the group members' personal preferences in the lower layer (i.e., IPM) and then infers the groups' representations in the higher layer (i.e., HRL).…”
Section: Hyperedge Embedding-based Group Representation Learning
Citation type: mentioning; confidence: 83%
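
The two-level design described in this excerpt (member preferences learned in a lower layer, group representations inferred on top) can be made concrete with a small sketch. Everything below, including the class name, the attention readout, and the dimensions, is an illustrative assumption rather than the cited paper's actual IPM/HRL architecture:

```python
import torch
import torch.nn as nn

class HierarchicalGroupEncoder(nn.Module):
    """Illustrative two-level encoder: a lower layer models each member's
    personal preference (cf. IPM) and a higher layer aggregates members
    into a group representation (cf. HRL). Names and dimensions are
    assumptions for illustration only."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.member_mlp = nn.Sequential(   # lower layer: per-member preference
            nn.Linear(in_dim, hid_dim), nn.ReLU()
        )
        self.attn = nn.Linear(hid_dim, 1)  # higher layer: attention over members

    def forward(self, member_feats: torch.Tensor) -> torch.Tensor:
        # member_feats: (num_members, in_dim) for one group
        h = self.member_mlp(member_feats)        # personal preferences
        w = torch.softmax(self.attn(h), dim=0)   # member importance weights
        return (w * h).sum(dim=0)                # group representation

# toy usage: a group of 4 members with 16-dim features
enc = HierarchicalGroupEncoder(16, 32)
group_vec = enc(torch.randn(4, 16))  # -> tensor of shape (32,)
```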
“…To integrate the group similarity based on common members into the learning process of hyperedge embedding, we devise a GNN-based hyperedge embedding model, called HRL. As OGs tend to be formed by chance [25,65], HRL also adopts an inductive graph embedding method [3,26] as its building block to generate embeddings for hyperedges, where a weighted feature aggregation scheme is proposed to account for the similarity between two groups.…”
Section: Hyperedge Embedding-based Group Representation Learning
Citation type: mentioning; confidence: 99%
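
A minimal sketch of the idea of weighting neighbor aggregation by common-member similarity follows; the Jaccard weighting, function names, and mean-style aggregator are assumptions, since the excerpt does not give the exact scheme:

```python
import torch

def jaccard_similarity(members_a: set, members_b: set) -> float:
    """Group similarity from common members (an assumed choice; the
    cited paper's exact weighting is not given in the excerpt)."""
    inter = len(members_a & members_b)
    union = len(members_a | members_b)
    return inter / union if union else 0.0

def weighted_aggregate(target: set, neighbors: list[set],
                       neighbor_feats: torch.Tensor) -> torch.Tensor:
    """Inductive, GraphSAGE-style aggregation: neighboring hyperedges'
    features are averaged with weights proportional to member overlap."""
    w = torch.tensor([jaccard_similarity(target, n) for n in neighbors])
    w = w / w.sum().clamp(min=1e-8)                 # normalize weights
    return (w.unsqueeze(1) * neighbor_feats).sum(dim=0)

# toy usage: aggregate two neighboring groups' 8-dim features
vec = weighted_aggregate({1, 2, 3}, [{2, 3}, {3, 4, 5}], torch.randn(2, 8))
```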
“…Figure 4 contrasts the original point cloud with the points reconstructed from each pooling method while the connectivity is unchanged; the figures for all point clouds are available in the supplementary material. Table 3 reports the MSE (values in scale of 10⁻³) in the autoencoder experiment; the Rank row indicates the average ranking of the methods across all datasets.…”
Section: Discussion
Citation type: mentioning; confidence: 99%
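
For reference, the MSE metric quoted above (reported in units of 10⁻³) amounts to the following computation; the point-cloud shapes and data here are made up for illustration:

```python
import numpy as np

def reconstruction_mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Mean squared error between an original point cloud and its
    reconstruction, scaled to units of 1e-3 as in the quoted table."""
    return float(np.mean((original - reconstructed) ** 2)) * 1e3

# toy usage: 1024 points in 3-D with a small reconstruction error
pts = np.random.rand(1024, 3)
noisy = pts + 0.01 * np.random.randn(1024, 3)
print(f"MSE = {reconstruction_mse(pts, noisy):.3f} (x 1e-3)")
```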
“…Other notable works include those of Diehl [17], Bodnar et al. [8], and a broad group of global pooling methods (cf. Section 4) that reduce a whole graph to a single vector [33,58,51,41,13,2,54,4].…”
Section: Pooling in Graph Neural Networks
Citation type: mentioning; confidence: 99%
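
Global pooling, as contrasted here with hierarchical methods, reduces all node features of a graph to one fixed-size vector. A minimal, generic sketch of such a readout (sum/mean/max variants, not any specific method from the cited list):

```python
import torch

def global_pool(node_feats: torch.Tensor, mode: str = "mean") -> torch.Tensor:
    """Global (readout) pooling: reduce a whole graph's node features
    of shape (num_nodes, d) to a single d-dimensional vector."""
    if mode == "sum":
        return node_feats.sum(dim=0)
    if mode == "max":
        return node_feats.max(dim=0).values
    return node_feats.mean(dim=0)

# toy usage: a 5-node graph with 8-dim node features
g = torch.randn(5, 8)
vec = global_pool(g, mode="sum")  # -> tensor of shape (8,)
```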
“…After feature graphs of both modalities are extracted, we embed them into vector form using an attention mechanism called multi-scale node attention [31]. Given a graph $G$ with $N$ nodes $\{h_1, \ldots, h_N\}$ and $M$ edges $\{r_1, \ldots, r_M\}$, where $h_i, r_j \in \mathbb{R}^d$ are the features of a node and an edge respectively, the embedded vector of $G$, denoted $a_G \in \mathbb{R}^{2d}$, is obtained by the following formula:…”
Section: Graph Embedding
Citation type: mentioning; confidence: 99%
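
The quoted formula is truncated, so the sketch below is only one plausible reading: a SimGNN-style attention readout applied separately to node and edge features, concatenated to give the $2d$-dimensional $a_G$. The weight matrix, the sigmoid gating, and the node/edge-concatenation interpretation of the $2d$ size are all assumptions, not the formula from [31]:

```python
import torch

def attention_readout(feats: torch.Tensor) -> torch.Tensor:
    """SimGNN-style attention readout over a set of d-dim features:
    each element is weighted by its similarity to a transformed global
    mean context. Illustrative only; the quoted formula is truncated."""
    d = feats.size(1)
    W = torch.randn(d, d)                        # hypothetical learnable weight
    context = torch.tanh(feats.mean(dim=0) @ W)  # global context vector
    att = torch.sigmoid(feats @ context)         # per-element attention score
    return (att.unsqueeze(1) * feats).sum(dim=0)

def embed_graph(node_feats: torch.Tensor, edge_feats: torch.Tensor) -> torch.Tensor:
    """a_G in R^{2d}: concatenation of node and edge attention readouts
    (an assumed reading of the 2d dimensionality in the excerpt)."""
    return torch.cat([attention_readout(node_feats),
                      attention_readout(edge_feats)])

# toy usage: 6 nodes and 9 edges with 32-dim features -> (64,) embedding
a_G = embed_graph(torch.randn(6, 32), torch.randn(9, 32))
```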