2022
DOI: 10.1109/tbdata.2017.2757522

FLAG: Faster Learning on Anchor Graph with Label Predictor Optimization

Abstract: Knowledge graphs have received intensive research interest. When the labels of most nodes or datapoints are missing, anchor graph and hierarchical anchor graph models can be employed. With an anchor graph or hierarchical anchor graph, we only need to optimize the labels of the coarsest anchors, and the labels of the datapoints can then be inferred from these anchors in a coarse-to-fine manner. The complexity of optimization is therefore reduced to a cubic cost with respect to the number of the coarsest anchors…
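The coarse-to-fine idea in the abstract can be sketched numerically. The toy below is an illustrative assumption, not the paper's FLAG algorithm: anchors are placed randomly, data-to-anchor weights use a simple softmax over the nearest anchors (real anchor graphs typically use Local Anchor Embedding weights), and the anchor-label optimization is plain ridge regression rather than the paper's label predictor. What it does show faithfully is the complexity reduction: only an m x m system over anchors is solved, and the n datapoint labels are then inferred as F = Z A.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, c = 1000, 20, 3            # datapoints, anchors, classes
X = rng.normal(size=(n, 2))
anchors = rng.normal(size=(m, 2))

# Z: n x m data-to-anchor weights, nonzero only on the s nearest anchors
# (softmax over squared distances; a simplification of LAE weights).
s = 3
d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
idx = np.argsort(d2, axis=1)[:, :s]
w = np.exp(-np.take_along_axis(d2, idx, axis=1))
w /= w.sum(axis=1, keepdims=True)
Z = np.zeros((n, m))
np.put_along_axis(Z, idx, w, axis=1)

# A few labeled points, one-hot encoded.
L = 10
Y = np.zeros((n, c))
Y[np.arange(L), rng.integers(0, c, L)] = 1.0

# Optimize only the anchor labels A: an m x m regularized system,
# O(m^3) instead of O(n^3) for a full graph.
lam = 0.01
A = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ Y)

# Coarse-to-fine inference: datapoint labels follow from anchor labels.
F = Z @ A
pred = F.argmax(axis=1)
```

Swapping the ridge term for a graph-Laplacian smoothness term over anchors recovers the usual anchor graph regularization objective; the solve remains m x m either way.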


Cited by 6 publications (4 citation statements)
References 34 publications
“…Graph-based Models: label propagation on sparse graph (approximate search [110], [227], [232]; division and conquer [45], [197]); optimization with anchor graph (single layer [132], [201], [228]; hierarchical layers [77], [200]).…”
Section: Strategies Representative Methodsmentioning
confidence: 99%
“…After that, hierarchical anchor graphs were proposed to retain sparse similarities over all instances while keeping a small number of anchors for label inference [200]. In cases where the smallest set of anchors still needs to be large and brings considerable computation, FLAG develops label optimizers for further acceleration [77]. Besides, EAGR proposes to perform label smoothness over anchors with a pruned adjacency [201].…”
Section: For Graph-based Modelsmentioning
confidence: 99%
“…As a result, the computational complexity can be greatly reduced. While there are different ways to build the adjacency matrix S in AGR [24][25][26], we argue that most of them are developed intuitively and lack a probabilistic explanation. In addition, AGR cannot directly infer the class labels of incoming data.…”
Section: Introductionmentioning
confidence: 92%
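For context on the adjacency matrix S discussed in the last citation statement: the standard anchor graph construction (assuming, as is conventional, that the data-to-anchor matrix Z is nonnegative with rows summing to one) builds S = Z Lambda^{-1} Z^T with Lambda = diag(Z^T 1). A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3

# Z: nonnegative data-to-anchor weights whose rows sum to one.
Z = rng.random((n, m))
Z /= Z.sum(axis=1, keepdims=True)

# Standard anchor-graph adjacency: S = Z diag(Z^T 1)^{-1} Z^T.
# Lambda's diagonal holds the anchor "degrees" (column sums of Z).
Lam_inv = np.diag(1.0 / Z.sum(axis=0))
S = Z @ Lam_inv @ Z.T
```

With row-stochastic Z this S comes out nonnegative, symmetric, and doubly stochastic, which is precisely the structure that makes the downstream label-propagation problem well behaved; whether this admits a probabilistic interpretation is the point the citing authors contest.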