2012 IEEE 12th International Conference on Data Mining
DOI: 10.1109/icdm.2012.72
Towards Active Learning on Graphs: An Error Bound Minimization Approach

Abstract: Active learning on graphs has received increasing interest in the past years. In this paper, we propose a nonadaptive active learning approach on graphs, based on generalization error bound minimization. In particular, we present a data-dependent error bound for a graph-based learning method, namely learning with local and global consistency (LLGC). We show that the empirical transductive Rademacher complexity of the function class for LLGC provides a natural criterion for active learning. The resulti…
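The base learner the abstract refers to, LLGC, has a well-known closed-form solution (Zhou et al., 2004). Below is a minimal NumPy sketch of that predictor, assuming a symmetric, nonnegative affinity matrix W and a one-hot label matrix Y; the paper's actual query-selection step (minimizing the empirical transductive Rademacher complexity bound) is specific to the paper and not reproduced here, and the function name is illustrative.

```python
import numpy as np

def llgc(W, Y, alpha=0.9):
    """Closed-form LLGC predictor (Zhou et al., 2004).

    W     : (n, n) symmetric, nonnegative affinity matrix of the graph.
    Y     : (n, c) label matrix; row i is one-hot if node i is labeled,
            all zeros otherwise.
    alpha : trade-off in (0, 1) between graph smoothness and fitting
            the initial labels.

    Returns the (n, c) score matrix F* = (I - alpha * S)^{-1} Y, where
    S = D^{-1/2} W D^{-1/2} is the symmetrically normalized affinity.
    Each unlabeled node is classified by the argmax of its row of F*.
    """
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))  # guard isolated nodes
    S = W * np.outer(d_inv_sqrt, d_inv_sqrt)
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, Y)
```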

Cited by 50 publications (68 citation statements) | References 8 publications
“…Active learning on graphs is an important technique that applies to many problems, such as Random Walk. Gu and Han [2012] proposed a nonadaptive active learning approach on graphs, based on generalization error bound minimization. When considering the traveling person's trajectories and decision making, we cannot ignore that there are person groups moving together.…”
Section: Related Work (mentioning; confidence: 99%)
“…Other algorithms exist that use a measure of confidence for the labels currently given, such as entropy [13] or variance [14]. Finally, the expected error reduction [15–18] algorithms come from the fact that the measure of performance is mostly the risk and that it makes more sense to minimize it directly rather than some other indirect criteria. Our work belongs to this last category.…”
Section: Related Work (mentioning; confidence: 99%)
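As an illustration of the confidence-based criteria the excerpt above mentions, here is a minimal sketch of entropy-driven query selection; the probability matrix P, the index array, and the function name are assumptions made for the example, not taken from the cited papers.

```python
import numpy as np

def entropy_query(P, unlabeled):
    """Pick the unlabeled node whose predicted class distribution is
    most uncertain, measured by Shannon entropy.

    P         : (n, c) row-stochastic matrix of class probabilities.
    unlabeled : 1-D integer array of currently unlabeled node indices.
    """
    eps = 1e-12                                   # avoid log(0)
    H = -(P * np.log(P + eps)).sum(axis=1)        # per-node entropy
    return int(unlabeled[np.argmax(H[unlabeled])])
```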
“…In the data-independent case, the optimal allocation does not depend on the samples received so far. It can be related to one-shot active learning, as defined in [18], in which the allocation of the budget is decided before sampling any instances. In the data-dependent case, the label given by the classifier at time t is also considered.…”
Section: Independent Clusters (mentioning; confidence: 99%)
“…Notice that the key quantity is the posterior marginal distribution $P_{Y_q=y,\mathbf{y}}(Y_i = y')$ in computing (6) and $P_{\mathbf{y}}(Y_q = y)$ in (7). An efficient computation of the posterior marginal would lead to an algorithm for PREDICT due to (3), and also to an algorithm for QUERY due to (5).…”
Section: Expected Error Minimization (EEM) (mentioning; confidence: 99%)
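The excerpt above describes expected error minimization, where the next query is the node that minimizes the expected zero-one risk over hypothetical labelings. A generic sketch under that definition follows; the `posterior` callable, which stands in for the cited paper's efficient marginal computation, is a hypothetical placeholder, as are the function and argument names.

```python
import numpy as np

def eem_query(P, unlabeled, posterior):
    """Expected Error Minimization (EEM) query selection.

    P         : (n, c) current class probabilities P(Y_i = y').
    unlabeled : list of currently unlabeled node indices.
    posterior : callable (q, y) -> (n, c) posterior marginals after
                hypothetically labeling node q with class y.
                (Placeholder for an efficient marginal computation.)

    Returns the query q minimizing the expected number of mistakes on
    the remaining unlabeled nodes, in expectation over P(Y_q = y).
    """
    best_q, best_risk = None, np.inf
    for q in unlabeled:
        rest = [i for i in unlabeled if i != q]
        risk = 0.0
        for y in range(P.shape[1]):
            P_new = posterior(q, y)               # hypothetical posterior
            # expected zero-one loss: 1 - max probability per node
            risk += P[q, y] * (1.0 - P_new[rest].max(axis=1)).sum()
        if risk < best_risk:
            best_q, best_risk = q, risk
    return best_q
```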