2007 IEEE 11th International Conference on Computer Vision
DOI: 10.1109/iccv.2007.4408839

Learning Globally-Consistent Local Distance Functions for Shape-Based Image Retrieval and Classification

Abstract: We address the problem of visual category recognition by learning an image-to-image distance function that attempts to satisfy the following property: for a focal image i, images j from the same category should be closer to i than images k from other categories.

[Figure 1: Three images from the Caltech101 data set, two from the dog category, one from the Faces category. We want to learn distance functions between pairs of images such that the distance from j to i (D_ji) is smaller than the distance from k to i (D_ki). Triplets like this one form the basis of the learning algorithm.]
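The triplet property in the caption suggests a large-margin formulation. Below is a minimal Python sketch of that idea, assuming each pair of images is described by a vector of elementary (e.g. patch-based) distances and that a non-negative weight vector is learned per focal image; the function names, the plain sub-gradient loop, and the hyperparameters are illustrative assumptions, not the paper's actual solver (which poses a constrained large-margin optimization).

```python
import numpy as np

def triplet_hinge(w, d_ij, d_ik, margin=1.0):
    """Hinge loss for one triplet (i, j, k): the weighted distance from
    the focal image i to the similar image j should be smaller than the
    distance to the dissimilar image k, by at least `margin`."""
    return max(0.0, margin + w @ d_ij - w @ d_ik)

def learn_focal_weights(triplets, dim, lr=0.01, epochs=100):
    """Learn a non-negative weight vector for one focal image by
    sub-gradient descent on the summed triplet hinge losses.
    `triplets` is a list of (d_ij, d_ik) pairs, each a length-`dim`
    vector of elementary (e.g. patch-based) distances."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for d_ij, d_ik in triplets:
            if triplet_hinge(w, d_ij, d_ik) > 0.0:
                w -= lr * (d_ij - d_ik)   # sub-gradient of the hinge term
        np.maximum(w, 0.0, out=w)         # project onto the non-negative orthant
    return w
```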

Cited by 293 publications (277 citation statements). References 16 publications.

“…In some applications this was shown to be inferior to uniform weighting of the kernels (Noble, 2008). The work of Frome et al. (2007) further learns a weighting over local distance functions for every image in the training set. Nonlinear image similarity learning was also studied in the context of dimensionality reduction, as in Hadsell et al. (2006).…”
Section: Related Work (citation type: mentioning; confidence: 99%)
“…However, in all the above examples, the precise numerical value of pairwise similarity between objects is usually not available. Fortunately, one can often obtain information about the relative similarity of different pairs (Frome et al., 2007), for instance, by presenting people with several object pairs and asking them to select the pair that is most similar. For large-scale data, where man-in-the-loop experiments are prohibitively costly, relative similarities can be extracted by analyzing pairs of images that are returned in response to the same text query (Schultz and Joachims, 2004).…”
Section: Introduction (citation type: mentioning; confidence: 99%)
“…In the second strategy, the system learns a feature space mapping (e.g., with LDA) with only those instances close to the test example [17,31], thereby tailoring the representation to the input. In a similar spirit, local metric learning methods use example-specific weights [15,25] or a cluster-specific feature transformation [32], then apply nearest neighbor classification. For all these prior methods, a test case is a new data point, and its neighboring examples are identified by nearest neighbor search (e.g., with Euclidean distance).…”
Section: Related Work (citation type: mentioning; confidence: 99%)
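As a hedged illustration of what "example-specific weights" can mean in this context, the sketch below (my own construction, not the method of [15] or [25]) gives each training example its own feature-weight vector and classifies a query by nearest neighbor under each example's weighted distance:

```python
import numpy as np

def weighted_nn_classify(query, train_X, train_y, train_W):
    """Nearest-neighbor classification with example-specific weights:
    the distance from the query to training example x_i is a weighted
    L1 distance under x_i's own weight vector w_i."""
    dists = [w @ np.abs(query - x) for x, w in zip(train_X, train_W)]
    return train_y[int(np.argmin(dists))]
```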
“…Consider, for example, a content-based image retrieval setting: if the query is an image of a natural scene, color might be important, but it might be less important for an indoor scene. Local learning algorithms [4,23,9,8,20] attempt to address this problem by adjusting the parameters of the model to the properties of the training set in different areas of the input space. In the transductive setting, the simplest local algorithm proceeds in two steps.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
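The quote above stops before spelling out the two steps. One common instantiation, offered purely as an assumption about what such a local algorithm might look like, is: (1) retrieve the training examples nearest to the test point, then (2) fit a simple predictor on those neighbors only.

```python
import numpy as np

def local_predict(x_test, X, y, k=15):
    """A minimal two-step local learner: (1) find the k training
    examples closest to the test point, (2) predict from a model fit
    on those neighbors only (here, simply a majority vote)."""
    idx = np.argsort(np.linalg.norm(X - x_test, axis=1))[:k]  # step 1
    labels, counts = np.unique(y[idx], return_counts=True)    # step 2
    return labels[np.argmax(counts)]
```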
“…These types of constraints could be obtained from feedback of users of the retrieval system. In standard ranking SVMs [18,1,9,8,5], one assumes that a set of elementary pair-wise similarity functions is given and uses the triplets to learn an optimal weighted combination of these functions. In this global ranking model, the same weighted combination is used for all queries, independently of where they lie in the query space.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
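To make the global ranking model concrete: each triplet constraint asks the weighted distance to the dissimilar example to exceed the weighted distance to the similar one by a margin, which reduces to a one-sided linear constraint on a difference vector. The sketch below is a generic illustration of this standard reduction, not the exact formulation of any of [18,1,9,8,5]; unlike the per-image sketch after the abstract, it learns one shared weight vector for all queries.

```python
import numpy as np

def rank_examples(triplets):
    """Reduce each triplet constraint  w @ d_ik - w @ d_ij >= 1  to a
    single difference vector; a linear large-margin solver then only
    needs  w @ diff >= 1  for every row."""
    return np.array([d_ik - d_ij for d_ij, d_ik in triplets])

def learn_global_weights(triplets, dim, lr=0.01, epochs=100, reg=1e-3):
    """One shared weight vector for all queries, trained by sub-gradient
    descent on an L2-regularized hinge loss over the difference vectors."""
    diffs = rank_examples(triplets)
    w = np.zeros(dim)
    for _ in range(epochs):
        for d in diffs:
            grad = reg * w            # regularization term
            if w @ d < 1.0:           # margin violated for this triplet
                grad -= d             # hinge sub-gradient
            w -= lr * grad
    return w
```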