2018
DOI: 10.48550/arxiv.1812.08972
Preprint

COSINE: Compressive Network Embedding on Large-scale Information Networks

Abstract: There has recently been a surge in approaches that learn low-dimensional embeddings of nodes in networks. Because many real-world networks are large-scale, it is inefficient for existing approaches to store large numbers of parameters in memory and update them edge by edge. Building on the observation that nodes with similar neighborhoods will be close to each other in the embedding space, we propose the COSINE (COmpresSIve NE) algorithm, which reduces the memory footprint and accelerates the training process by parameter sharing amon…
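As a rough illustration of the parameter-sharing idea sketched in the abstract, the following Python snippet composes each node's embedding from a small codebook of shared group vectors instead of storing one full vector per node. The random group assignment is a placeholder; COSINE's actual grouping is driven by neighborhood similarity and graph partitioning, which this sketch does not reproduce.

```python
import numpy as np

# Hypothetical illustration: nodes share parameters through a small codebook
# of "group" vectors instead of one full embedding row per node. The grouping
# below is a random placeholder, not COSINE's neighborhood-based grouping.
num_nodes = 1_000_000
num_groups = 10_000          # shared codebook size << num_nodes
dim = 128
codes_per_node = 4           # each node composes its vector from a few groups

rng = np.random.default_rng(0)
codebook = rng.normal(scale=0.1, size=(num_groups, dim))        # shared parameters
node_to_groups = rng.integers(0, num_groups, size=(num_nodes, codes_per_node))

def embed(node_id: int) -> np.ndarray:
    """Compose a node embedding as the mean of its shared group vectors."""
    return codebook[node_to_groups[node_id]].mean(axis=0)

print(embed(42).shape)  # (128,)
```

With these made-up sizes, the trainable table holds num_groups x dim floats rather than num_nodes x dim, which is where the memory saving in such a scheme would come from.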

Cited by 4 publications (5 citation statements)
References 42 publications
“…Parameter sharing, i.e., model parallelism, is another approach to dealing with very large-scale datasets. COSINE and PyTorch-BigGraph (PBG) [23], [54] utilize a parameter-sharing approach. These models generate non-overlapping partitions with distinct vertices.…”
Section: Large-scale Network Embedding Framework (mentioning)
confidence: 99%
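A minimal sketch of the non-overlapping partitioning this statement refers to, assuming a simple hash-based split into k buckets; real systems such as PyTorch-BigGraph use structured partitioners, so the bucket assignment here is purely illustrative.

```python
from collections import defaultdict

# Illustrative only: split vertices into k non-overlapping buckets so that
# each bucket's embedding parameters can be stored and updated independently.
def partition_vertices(vertices, k):
    buckets = defaultdict(list)
    for v in vertices:
        buckets[hash(v) % k].append(v)   # stand-in for a real graph partitioner
    return buckets

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
vertices = {u for e in edges for u in e}
buckets = partition_vertices(vertices, k=2)

# An edge (u, v) touches at most two buckets, so training can proceed one
# bucket pair at a time, keeping only those parameter slices in memory.
for u, v in edges:
    print((u, v), "-> bucket pair", (hash(u) % 2, hash(v) % 2))
```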
“…Manifold learning models are usually designed for general usage, which may ignore the unique characteristics in the network topology. Most existing network embedding models focus on encapsulating the topology information into the node representations [3,11,18,20,35]. The motivation is that nodes with similar topology structures (e.g., many common neighbors) should be distributed closely in the learned latent space.…”
Section: Related Work (mentioning)
confidence: 99%
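To make the "many common neighbors" intuition concrete, this small example counts common neighbors on the networkx karate-club graph; a topology-preserving embedding following that motivation would place high-count pairs close together in the latent space. The chosen node pairs are arbitrary.

```python
import networkx as nx

# Toy graph; pairs with many shared neighbors are the ones a topology-based
# embedding would place close together in the latent space.
G = nx.karate_club_graph()

def common_neighbor_count(G, u, v):
    return len(set(G[u]) & set(G[v]))

for u, v in [(0, 1), (0, 33), (32, 33)]:
    print((u, v), "common neighbors:", common_neighbor_count(G, u, v))
```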
“…Although a lot of network embedding models have been proposed, they usually suffer from high memory usage. Recently, some works have focused on learning memory-saving embeddings [4,25,35]. DNE [25] learned binary codings as the node embeddings by adding binary constraints into the matrix factorization, which suffered from undesirable embedding quality.…”
Section: Related Work (mentioning)
confidence: 99%
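A back-of-the-envelope sketch of the memory saving that binary codings aim for. DNE learns the binary codes inside the factorization itself; the sign-thresholding below is only a post-hoc stand-in used to show the storage effect, with made-up sizes.

```python
import numpy as np

rng = np.random.default_rng(1)
real_emb = rng.normal(size=(10_000, 128)).astype(np.float32)   # 10k nodes, 128-d

# Post-hoc binarization (not DNE's learned binary coding): keep only the sign,
# then pack 8 binary dimensions per byte.
binary_emb = np.packbits((real_emb > 0).astype(np.uint8), axis=1)

print("float32 bytes:", real_emb.nbytes)       # 10_000 * 128 * 4
print("packed-bit bytes:", binary_emb.nbytes)  # 10_000 * 128 / 8, i.e., 32x smaller
```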
“…Both of these works only focus on improving embedding quality without improving scalability. Later, Zhang et al. (2018b); Akbas & Aktas (2019) attempt to improve graph embedding scalability by embedding only the coarsest graph. However, their approaches lack proper refinement methods to generate high-quality embeddings of the original graph (Liang et al., 2018).…”
(mentioning)
confidence: 99%
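To illustrate the coarsest-graph embedding strategy discussed in this statement, here is a single coarsening step that contracts a greedy edge matching into super-nodes. The matching heuristic and example graph are invented for illustration; the cited methods use more careful coarsening and, as the statement notes, differ in how (or whether) they refine embeddings back onto the original graph.

```python
import networkx as nx

def coarsen_once(G):
    """Contract a greedy edge matching into super-nodes (illustrative only)."""
    matched, mapping = set(), {}
    for u, v in G.edges():
        if u not in matched and v not in matched:
            matched.update((u, v))
            mapping[u] = mapping[v] = f"{u}+{v}"
    for node in G.nodes():
        mapping.setdefault(node, node)   # unmatched nodes map to themselves
    coarse = nx.Graph()
    coarse.add_nodes_from(set(mapping.values()))
    for u, v in G.edges():
        cu, cv = mapping[u], mapping[v]
        if cu != cv:
            coarse.add_edge(cu, cv)
    return coarse, mapping

G = nx.karate_club_graph()
coarse, mapping = coarsen_once(G)
print(G.number_of_nodes(), "->", coarse.number_of_nodes())  # fewer super-nodes
```

Embedding is then done on the (much smaller) coarse graph; the refinement question the statement raises is how to project those super-node embeddings back to the original nodes without losing quality.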