2004
DOI: 10.1007/978-3-540-30115-8_48

Exploiting Unlabeled Data in Content-Based Image Retrieval

Abstract: In this paper, the SSAIR (Semi-Supervised Active Image Retrieval) approach, which attempts to exploit unlabeled data to improve the performance of content-based image retrieval (CBIR), is proposed. This approach combines the merits of semi-supervised learning and active learning. In detail, in each round of relevance feedback, two simple learners are trained from the labeled data, i.e. images from the user query and user feedback. Each learner then classifies the unlabeled images in the database and pass…
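The visible abstract sketches one relevance-feedback round: two simple learners are trained on the labeled images (query plus feedback), each classifies the unlabeled images in the database, and their outputs drive both semi-supervised labeling and active selection of images to show the user. The rest of the procedure is cut off above, so the Python sketch below only illustrates that general co-training-with-active-learning round, not the paper's exact algorithm; the k-NN learners, the function name feedback_round, and all parameter values are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def feedback_round(X_labeled, y_labeled, X_unlabeled,
                   n_confident=10, n_query=5):
    """One hypothetical relevance-feedback round in an SSAIR-like scheme.

    y_labeled holds 1 for relevant and 0 for irrelevant images; both
    classes must be present. The concrete learners used in the paper are
    not shown in the visible abstract; two k-NN classifiers stand in here.
    """
    learner_a = KNeighborsClassifier(n_neighbors=3).fit(X_labeled, y_labeled)
    learner_b = KNeighborsClassifier(n_neighbors=7).fit(X_labeled, y_labeled)

    # Estimated probability that each unlabeled image is relevant.
    p_a = learner_a.predict_proba(X_unlabeled)[:, 1]
    p_b = learner_b.predict_proba(X_unlabeled)[:, 1]

    # Semi-supervised step: each learner's most confident unlabeled images
    # (probability closest to 0 or 1) are candidates to be passed to the
    # other learner as newly labeled examples.
    confident_a = np.argsort(np.abs(p_a - 0.5))[-n_confident:]
    confident_b = np.argsort(np.abs(p_b - 0.5))[-n_confident:]

    # Active step: images on which the two learners disagree most are the
    # most informative ones to show the user in the next feedback round.
    to_query = np.argsort(np.abs(p_a - p_b))[-n_query:]

    # Retrieval result: rank the database by the combined relevance score.
    ranking = np.argsort(-(p_a + p_b) / 2.0)
    return confident_a, confident_b, to_query, ranking
```

In a full system, the confident images would be added to the other learner's training set with pseudo-labels before retraining, while the queried images would receive true labels from the user in the next feedback round.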

Cited by 57 publications (39 citation statements). References 15 publications.
“…Another co-training style algorithm which uses more than two learners has been presented by Zhou and Goldman [47]. Some variants of co-training [48], [49] which combine semi-supervised learning with active learning and do not require different views, have been applied to content-based image retrieval, where images provided by the user in the query and relevance feedbacks are regarded as labeled examples while the images existing in the image database are regarded as unlabeled examples.…”
Section: Semi-supervised Learning (mentioning)
confidence: 99%
“…This implies that the conditional independence [8] or even the weak dependence [2] between the two views is not needed, at least, for iterative co-training which is actually the working routine taken by many co-training style algorithms [20], [47], [48], [49], [51]. In fact, the assumption of two sufficient views is too strong that Zhou et al [53] have shown that when this assumption can be met, semi-supervised learning given only one labeled example is feasible.…”
Section: Semi-supervised Learning (mentioning)
confidence: 99%
“…Here the main difficulty lies in the gap between the high-level image semantics and the low-level image features, due to the rich content but subjective semantics of an image. Although much endeavor has been devoted to bridging this gap [4] [11], it remains an unsolved problem at present. Nevertheless, many good CBIR systems have already been developed.…”
Section: Introduction (mentioning)
confidence: 99%