2004
DOI: 10.1023/b:visi.0000004830.93820.78
Boosting Image Retrieval

Abstract: We present an approach for image retrieval using a very large number of highly selective features and efficient online learning. Our approach is predicated on the assumption that each image is generated by a sparse set of visual "causes" and that …

Cited by 287 publications (209 citation statements)
References 36 publications
“…Some algorithms assume the user will give binary feedback for positive and negative examples [32,46,47]; some take only positive examples [7,38]; some take positive and negative examples with a "degree of (ir)relevance" for each [39,62]; some assume the feedback is only a "comparative judgment" rather than a definite hit or miss [9]; and some use both labeled and unlabeled data for training: Wu et al. [58] proposed the D-EM algorithm within a transductive learning framework, using examples from user feedback (labeled data) as well as other data points (unlabeled data). It performs discriminant analysis inside EM iterations to select a subspace of features, such that the two-class (positive and negative) assumption on the data distributions has better support.…”
Section: User Model: What to Feed Back?
Confidence: 99%
“…The number of training examples is small (typically < 20 per round of interaction, depending on the user's patience and willingness to cooperate) relative to the dimension of the feature space (from dozens to hundreds, or even more), while the number of classes is large for most real-world image databases. For such small sample sizes, some existing learning machines such as support vector machines (SVMs) [50] cannot give stable or meaningful results [46,62], unless more training samples can be elicited from the user [47].…”
Section: The Relevance Feedback Problem
Confidence: 99%
“…Boosting, like many machine-learning methods, is entirely data-driven in the sense that the classifier it generates is derived exclusively from the evidence present in the training data itself [Schapire 2003]. Moreover, allowing redundancy and overlap in the feature set has proven more effective for recognition and classification tasks than using orthogonal features [Tieu and Viola 2004].…”
Section: Introduction
Confidence: 99%
“…Boosting as a means of classifier combination provides an efficient way to select and combine features. It has been used effectively for online learning of query features in relevance feedback for image retrieval [Tieu and Viola 2004; Amores et al. 2004]. Boosting, like many machine-learning methods, is entirely data-driven in the sense that the classifier it generates is derived exclusively from the evidence present in the training data itself [Schapire 2003].…”
Section: Introduction
Confidence: 99%
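The boosting-for-feature-selection idea quoted in these excerpts can be sketched as AdaBoost over single-feature threshold stumps: each round greedily picks the one feature (and threshold) with the lowest weighted error, so the final relevance classifier uses only a sparse subset of a very large feature pool. This is an illustrative sketch under that general scheme, not the paper's actual implementation; the function names, the exhaustive stump search, and the toy labels in {-1, +1} (relevant / non-relevant feedback) are all assumptions.

```python
# Hedged sketch: AdaBoost with one-feature decision stumps as a
# feature selector for relevance feedback. Illustrative only.
import numpy as np

def boost_select(X, y, n_rounds=5):
    """X: (n_samples, n_features); y: labels in {-1, +1}.
    Returns a list of (feature, threshold, polarity, alpha) stumps,
    i.e. the few selected features and their vote weights."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)              # per-example weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        # Greedy search: pick the single stump with lowest weighted error.
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
        err, j, thr, pol, pred = best
        err = max(err, 1e-10)            # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)   # upweight misclassified examples
        w /= w.sum()
        stumps.append((j, thr, pol, alpha))
    return stumps

def boost_score(X, stumps):
    """Relevance score: weighted vote of the selected stumps."""
    s = np.zeros(len(X))
    for j, thr, pol, alpha in stumps:
        s += alpha * pol * np.where(X[:, j] >= thr, 1, -1)
    return s
```

A database image would then be ranked by `boost_score`; because each round commits to one feature, the learned query model stays sparse even when the candidate feature pool is very large, which is the property the excerpts attribute to boosting.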