Proceedings of the 2018 SIAM International Conference on Data Mining
DOI: 10.1137/1.9781611975321.18

Semi-supervised Embedding in Attributed Networks with Outliers

Abstract: In this paper, we propose a novel framework, called Semi-supervised Embedding in Attributed Networks with Outliers (SEANO), to learn a low-dimensional vector representation that systematically captures the topological proximity, attribute affinity and label similarity of vertices in a partially labeled attributed network (PLAN). Our method is designed to work in both transductive and inductive settings while explicitly alleviating noise effects from outliers. Experimental results on various datasets drawn from…
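
To make the abstract's objective concrete, the sketch below shows the general shape of such a semi-supervised embedding model: a shared encoder over vertex attributes trained jointly with a supervised label loss on labeled vertices and an unsupervised graph-context loss on all vertices. The layer sizes, names, and the single linear encoder are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiSupervisedEmbedder(nn.Module):
    """Illustrative sketch, not SEANO's exact architecture: a shared
    encoder over vertex attributes feeds two heads, one predicting class
    labels (supervised) and one predicting graph context (unsupervised)."""

    def __init__(self, attr_dim, embed_dim, num_classes, num_vertices):
        super().__init__()
        self.encoder = nn.Linear(attr_dim, embed_dim)        # the learned embedding
        self.label_head = nn.Linear(embed_dim, num_classes)
        self.context_head = nn.Linear(embed_dim, num_vertices)

    def forward(self, x):
        z = torch.relu(self.encoder(x))
        return z, self.label_head(z), self.context_head(z)

def combined_loss(model, x, labels, labeled_mask, contexts, alpha=0.5):
    """Supervised loss on the labeled vertices only, unsupervised
    context loss on all vertices; alpha balances the two terms."""
    _, label_logits, ctx_logits = model(x)
    l_sup = F.cross_entropy(label_logits[labeled_mask], labels[labeled_mask])
    l_unsup = F.cross_entropy(ctx_logits, contexts)
    return alpha * l_sup + (1.0 - alpha) * l_unsup
```

The shared hidden layer z serves as the low-dimensional representation; balancing a label head against a context head is one common way to fold topology, attributes, and labels into a single embedding.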


Cited by 92 publications (59 citation statements) · References 30 publications
“…where $w_{u^*v^*}$ and $w_{u'v'}$ are the weight vectors for edges $(u^*, v^*)$ and $(u', v')$ in the softmax layer, respectively. The denominator is computed by negative sampling [20]. Although the embedding function is learned by training a context inference task, it is still considered "unsupervised" because the contexts are calculated by sampling on the attributed graph, which is independent of any supervised learning task [21].…”
Section: Unsupervised Loss Function $L_U$ (mentioning)
confidence: 99%
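Negative sampling replaces the full softmax denominator (a sum over all candidate context edges) with a handful of sampled "negative" terms. A minimal NumPy sketch of this standard estimator follows; the exact sampling distribution and edge-weight parameterization in the cited works may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def negative_sampling_loss(h, w_pos, w_negs):
    """Estimate the softmax objective with negative sampling.

    h      -- embedding of the source vertex, shape (d,)
    w_pos  -- softmax weight vector of the observed context edge, shape (d,)
    w_negs -- list of weight vectors for k sampled negative contexts
    """
    loss = -np.log(sigmoid(h @ w_pos))            # pull the true context closer
    for w_neg in w_negs:
        loss -= np.log(sigmoid(-(h @ w_neg)))     # push sampled negatives away
    return loss
```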
“…where $w_{u^*v^*}$ and $w_{u'v'}$ are the weight vectors for edges $(u^*, v^*)$ and $(u', v')$ in the softmax layer, respectively. The denominator is computed by negative sampling [14]. Although the embedding function is learned by training a context inference task, it is still considered "unsupervised" because the contexts are calculated by sampling on the attributed graph, which is independent of any supervised learning task [15].…”
Section: B. Static Loss Functions (mentioning)
confidence: 99%
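The "contexts calculated by sampling on the graph" are commonly generated with short truncated random walks, DeepWalk-style. The sketch below assumes that scheme; the cited papers may sample contexts differently.

```python
import random

def sample_contexts(adj, walk_length=5, window=2, seed=0):
    """Generate (vertex, context) training pairs from truncated random
    walks; adj maps each vertex to a list of its neighbors."""
    rng = random.Random(seed)
    pairs = []
    for start in adj:
        walk = [start]
        for _ in range(walk_length - 1):
            neighbors = adj[walk[-1]]
            if not neighbors:
                break
            walk.append(rng.choice(neighbors))
        for i, v in enumerate(walk):      # slide a window over the walk
            for j in range(max(0, i - window), min(len(walk), i + window + 1)):
                if j != i:
                    pairs.append((v, walk[j]))
    return pairs

# Example: a tiny graph as an adjacency dict.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(sample_contexts(adj)[:5])
```

Because the pairs depend only on graph structure (and possibly attributes), the resulting loss is independent of any label information, which is why the quoted passage calls the task "unsupervised."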
“…Temporal loss refers to the loss associated with graph dynamics. Unlike existing works [14]–[16], the proposed embedding model takes graph dynamics into account. Understanding the temporal dynamics of attributed bipartite graphs is crucial for modeling churn behavior precisely.…”
Section: Temporal Loss Function (mentioning)
confidence: 99%
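The quoted passage does not spell out its temporal loss. A generic smoothness penalty, often used for dynamic-graph embeddings, illustrates what a "loss related to graph dynamics" can look like; this is an assumption for illustration, not the cited paper's definition.

```python
import numpy as np

def temporal_smoothness_loss(Z_t, Z_prev):
    """Hypothetical temporal loss: penalize large jumps in vertex
    embeddings between two consecutive graph snapshots, where Z_t and
    Z_prev are (n, d) embedding matrices for the same n vertices."""
    return float(np.mean(np.sum((Z_t - Z_prev) ** 2, axis=1)))
```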
“…Most existing work on network embedding either works in an unsupervised fashion, i.e., does not consider label information at all, or implicitly assumes a balanced setting, i.e., is not suitable for the imbalanced setting. More specifically, unsupervised embedding approaches [5], [13], [30], [36] aim to learn node representations that preserve the graph's topological structure in the feature space, while semi-supervised methods [1], [2], [32] implicitly assume that the labeled set consists of roughly equal numbers of examples from each class. In the presence of an imbalanced labeled set, however, these methods inevitably suffer from imbalanced node-context pairs, resulting in node representations that are not amenable to the downstream classification task.…”
Section: Introduction (mentioning)
confidence: 99%