2018 IEEE International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm.2018.00071
A United Approach to Learning Sparse Attributed Network Embedding

Cited by 44 publications (16 citation statements). References 23 publications.
“…Existing attributed network embedding methods can be categorized as: (a) random-walk-based methods [41], (b) adjacency-matrix-decomposition-based methods, and (c) deep-learning-based methods. Among the random-walk-based methods, SANE [42] learns low-dimensional representations by using random walks to generate node sequences and an attention mechanism to aggregate the information of neighboring nodes. TADW [43] incorporates DeepWalk and text features into an integrated framework for network representation learning.…”
Section: Attributed Network Embedding
confidence: 99%
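The citation statement above describes the common first stage of random-walk-based embedding methods such as SANE and DeepWalk: generating node sequences by walking the graph. The following is a minimal, hedged sketch of that stage only (it is not the SANE implementation, which additionally applies attention over node attributes); the adjacency-dict representation and parameter names are illustrative assumptions.

```python
import random

def random_walks(adj, walk_length=5, walks_per_node=2, seed=0):
    """Generate DeepWalk-style node sequences via uniform random walks.

    adj: dict mapping each node to a list of its neighbors.
    Returns a list of node sequences, which downstream methods treat
    as 'sentences' for skip-gram training (or, in SANE, as contexts
    whose attribute information is aggregated with attention).
    """
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        for start in adj:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = adj[walk[-1]]
                if not neighbors:  # dead end: terminate this walk early
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# Toy network topology (node attributes omitted; only edges drive the walk).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walks = random_walks(adj)
```

Each returned sequence starts at a distinct node and follows only existing edges, so the corpus of walks reflects local graph structure, which is what the subsequent embedding step exploits.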
“…In summary, the flowchart of the MAAT framework is given as Algorithm 1, which adopts a two-stage solution to the multiobjective optimization [34], [35]. MAAT has the following advantages: (1) it is model-agnostic and suitable for a wide range of CDMs, including those that traditional CAT methods cannot accommodate; (2) it optimizes both quality and diversity with novel score functions and has an efficient, performance-guaranteed optimization algorithm.…”
Section: F Summary
confidence: 99%
“…Inspired by the remarkable development of graph neural networks in various domains [13], [40], [27], [39], researchers have noticed their potential for molecular property prediction. Generally, by treating a molecule as a graph, several graph neural networks have been applied [14, 24, 40] as architectures that can directly handle non-Euclidean data such as graphs.…”
Section: Related Work
confidence: 99%