2006
DOI: 10.1007/11871842_7
A Selective Sampling Strategy for Label Ranking

Abstract: We propose a novel active learning strategy based on the compression framework of [9] for label ranking functions which, given an input instance, predict a total order over a predefined set of alternatives. Our approach is theoretically motivated by an extension to ranking and active learning of Kääriäinen's generalization bounds using unlabeled data [7], initially developed in the context of classification. The bounds we obtain suggest a selective sampling strategy provided that a sufficiently, yet reasonably…
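The selection rule sketched in the abstract is easiest to picture as a disagreement test over predicted orders. Below is a minimal illustrative loop, assuming Kendall-tau distance as the disagreement measure and hypothetical `ranker_a`, `ranker_b`, and `oracle` callables; the paper's actual criterion is derived from its generalization bounds and is not reproduced here.

```python
import itertools

def kendall_tau_distance(order_a, order_b):
    """Count discordant label pairs between two total orders over the
    same label set -- a standard disagreement measure for rankings."""
    pos_a = {label: i for i, label in enumerate(order_a)}
    pos_b = {label: i for i, label in enumerate(order_b)}
    return sum(
        (pos_a[u] - pos_a[v]) * (pos_b[u] - pos_b[v]) < 0
        for u, v in itertools.combinations(order_a, 2)
    )

def selective_sampling(stream, ranker_a, ranker_b, threshold, oracle):
    """Generic disagreement-based selective sampling: ask the oracle for
    the true ranking of an instance only when the two hypotheses'
    predicted orders disagree by more than `threshold`."""
    labeled = []
    for x in stream:
        if kendall_tau_distance(ranker_a(x), ranker_b(x)) > threshold:
            labeled.append((x, oracle(x)))  # costly labeling call
    return labeled
```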

Cited by 7 publications (8 citation statements). References 9 publications.
“…Active learning has been actively extended to rank learning [2,15,16,28,45]. Based on uncertainty sampling, [45] selected the most ambiguous document pairs, in which two documents received close scores predicted by the current model, as informative examples.…”
Section: Active Rank Learning
confidence: 99%
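To make the pair-selection heuristic in the statement above concrete, here is a small sketch; `score` stands in for the current ranking model and is a placeholder of my own, not an API from [45].

```python
import itertools

def most_ambiguous_pairs(docs, score, k=1):
    """Pairwise uncertainty sampling: order all document pairs by the
    absolute gap between their predicted scores and return the k pairs
    with the smallest gap -- the pairs the current model is least sure
    how to rank relative to each other."""
    s = {d: score(d) for d in docs}
    pairs = itertools.combinations(docs, 2)
    return sorted(pairs, key=lambda p: abs(s[p[0]] - s[p[1]]))[:k]

# With scores {"a": 0.51, "b": 0.49, "c": 0.90}, the pair ("a", "b")
# is chosen first: its score gap (0.02) is the smallest.
```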
“…target domain) where little or no labeled data is available. Active rank learning [2,15,16,28,45] is motivated to proactively select and annotate the most informative examples, those which, if labeled, are expected to maximize the information value to the ranking function.…”
Section: Introduction
confidence: 99%
“…We tested the performance of our method (denoted by LossMin) against four baselines: the entropy-based sampling method of [22] (denoted by Entropy), the uncertainty sampling heuristic of [30] (denoted by Uncertain), the divergence-based sampling strategy of [1] (denoted by Diverse), and random sampling (denoted by Random). The entropy method [22] samples the most confusing instances for the current ranker, identified by estimating the bipartite ranking error [8], which counts an error each time a relevant instance is ranked lower than an irrelevant one.…”
Section: Data and Problem Setup
confidence: 99%
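The bipartite ranking error mentioned in this statement has a direct counting definition. A minimal sketch for binary relevance labels follows; the function name and tie handling are my own choices, not taken from [8].

```python
def bipartite_ranking_error(scores, labels):
    """Bipartite ranking error: the fraction of (relevant, irrelevant)
    pairs in which the relevant instance is scored strictly below the
    irrelevant one. Ties are ignored in this sketch; with half-credit
    for ties this quantity equals 1 - AUC."""
    relevant = [s for s, y in zip(scores, labels) if y == 1]
    irrelevant = [s for s, y in zip(scores, labels) if y == 0]
    errors = sum(r < i for r in relevant for i in irrelevant)
    return errors / (len(relevant) * len(irrelevant))

# scores [0.9, 0.2, 0.4] with labels [1, 1, 0]: the relevant item
# scored 0.2 is ranked below the irrelevant 0.4, so the error is 1/2.
```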
“…Another recent development in active rank learning is the divergence-based sampling method of (Amini et al, 2006). The proposed method selects the samples at which two different ranking functions maximally disagree.…”
Section: Related Work
confidence: 99%