Twenty-First International Conference on Machine Learning (ICML '04), 2004
DOI: 10.1145/1015330.1015429

Semi-supervised learning using randomized mincuts

Abstract: In many application domains there is a large amount of unlabeled data but only a very limited amount of labeled training data. One general approach that has been explored for utilizing this unlabeled data is to construct a graph on all the data points based on distance relationships among examples, and then to use the known labels to perform some type of graph partitioning. One natural partitioning to use is the minimum cut that agrees with the labeled data (Blum & Chawla, 2001), which can be thought of as giv…
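The mincut construction the abstract alludes to is compact enough to sketch. The code below is an illustrative rendering under assumptions of mine, not the paper's implementation: it builds a symmetric kNN graph with unit edge capacities, ties each labeled point to an auxiliary source or sink terminal with a capacity too large to cut, and reads labels off the minimum s-t cut. The function name mincut_ssl and the choice k=5 are hypothetical. The randomized variant studied in this paper goes further, repeating such cuts over randomly perturbed graphs and aggregating the results, which this sketch omits.

```python
import networkx as nx
import numpy as np
from sklearn.neighbors import kneighbors_graph

def mincut_ssl(X, y, k=5):
    """Semi-supervised labeling via a minimum s-t cut on a kNN graph.

    X: (n, d) feature matrix. y: length-n array with +1/-1 on labeled
    points and 0 on unlabeled points. Returns a full +1/-1 labeling.
    """
    n = len(y)

    # Symmetric k-nearest-neighbor graph with unit edge capacities.
    A = kneighbors_graph(X, n_neighbors=k, mode="connectivity")
    A = ((A + A.T) > 0).astype(int)
    G = nx.from_scipy_sparse_array(A)
    nx.set_edge_attributes(G, 1, "capacity")

    # Tie labeled points to auxiliary terminals with a capacity no cut
    # can afford, so the minimum cut must agree with the labeled data.
    s, t = "source", "sink"
    big = n * n  # exceeds any cut through unit-capacity graph edges
    for i, label in enumerate(y):
        if label == +1:
            G.add_edge(s, i, capacity=big)
        elif label == -1:
            G.add_edge(i, t, capacity=big)

    # Each unlabeled node takes the label of the side of the cut it lands on.
    _, (source_side, _) = nx.minimum_cut(G, s, t)
    return np.array([+1 if i in source_side else -1 for i in range(n)])
```

Setting the terminal capacity to n² guarantees no minimum cut ever separates a labeled point from its terminal, since cutting through the unit-capacity graph edges is always cheaper.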


Cited by 208 publications (117 citation statements) | References 12 publications
“…The small portion of vertices carrying seed labels are then harnessed via graph partition or information propagation to predict the labels for the unlabeled vertices. For instance, the graph mincuts approach formulated GSSL as a graph cut problem [3], [4]. Other GSSL methods such as graph transduction formulated GSSL as a regularized function estimation over the graph.…”
Section: Introduction (mentioning; confidence: 99%)
“…The weighted graph, producing the optimal label prediction function, essentially propagates the initial label information from the labeled samples to the vast amount of unlabeled ones. Many popular GSSL algorithms including graph cuts [3], [4], [25], [31], graph-based random walks [1], [45], manifold regularization [2], [42], and graph transduction [61], [64] have been proposed. Comprehensive surveys of these methods can be found in [6] and [63].…”
Section: Introduction (mentioning; confidence: 99%)
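The "regularized function estimation over the graph" and label-propagation behavior these excerpts describe can be illustrated with a harmonic-function solver in the spirit of Gaussian-fields label propagation. This is a sketch under my own assumptions (a dense symmetric weight matrix W, binary ±1 seed labels, a direct linear solve), not code from any of the cited papers.

```python
import numpy as np

def harmonic_label_propagation(W, y_labeled, labeled_idx):
    """Graph transduction via the harmonic function on a weighted graph.

    W: (n, n) symmetric nonnegative weight matrix. y_labeled: +1/-1 values
    for the nodes listed in labeled_idx. Returns a real-valued score per
    node; threshold at 0 for a binary labeling.
    """
    n = W.shape[0]
    labeled_idx = np.asarray(labeled_idx)
    unlabeled_idx = np.setdiff1d(np.arange(n), labeled_idx)

    # Combinatorial graph Laplacian L = D - W.
    L = np.diag(W.sum(axis=1)) - W

    # Harmonic solution: solve L_uu f_u = -L_ul f_l, which makes every
    # unlabeled score the weighted average of its neighbors' scores.
    L_uu = L[np.ix_(unlabeled_idx, unlabeled_idx)]
    L_ul = L[np.ix_(unlabeled_idx, labeled_idx)]
    f = np.zeros(n)
    f[labeled_idx] = y_labeled
    f[unlabeled_idx] = np.linalg.solve(L_uu, -L_ul @ f[labeled_idx])
    return f
```

Where the mincut approach commits to a single hard partition, this solver propagates the seed labels as real-valued scores, which is the sense in which the weighted graph "produces the optimal label prediction function" in the excerpt above.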
“…Graph-based semi-supervised approaches make use of dependencies introduced between the labels of nearby examples on a constructed graph [22], [29]. These models train to encourage nearby data points to have the same class labels.…”
Section: B. Semi-Supervised Learning (mentioning; confidence: 99%)
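The intuition in this excerpt, that nearby points should receive the same labels, is conventionally expressed as a quadratic smoothness penalty. With edge weights $w_{ij}$ and combinatorial Laplacian $L = D - W$, the standard identity (summing over unordered edges) is

$$\sum_{(i,j) \in E} w_{ij}\,(f_i - f_j)^2 = f^\top L f,$$

which is small exactly when the prediction $f$ changes little across heavily weighted edges.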
“…This approach guarantees that the number of mistakes is bounded by a quantity that depends linearly on the cutsize Ī¦ G (y). Further results involving the prediction of node labels in graphs with known structure include [5,6,7,8,9,10,11,12,13,14]. Since all these papers assume knowledge of the entire graph in advance, the techniques proposed for transductive binary prediction do not have any mechanism for guiding the exploration of the graph, hence they do not work well on the exploration/prediction problem studied in this work.…”
Section: Related Work (mentioning; confidence: 99%)
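For reference, the cutsize $\Phi_G(y)$ appearing in this excerpt is the standard count of edges whose endpoints receive different labels under a labeling $y$ of $G = (V, E)$:

$$\Phi_G(y) = \bigl|\{(i,j) \in E : y_i \neq y_j\}\bigr|,$$

so the quoted mistake bound is linear in the number of disagreement edges of the true labeling.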