2004
DOI: 10.21236/ada461760

Learning Euclidean Embeddings for Indexing and Classification

Abstract: BoostMap is a recently proposed method for efficient approximate nearest neighbor retrieval in arbitrary non-Euclidean spaces with computationally expensive and possibly non-metric distance measures. Database and query objects are embedded into a Euclidean space, in which similarities can be rapidly measured using a weighted Manhattan distance. The key idea is formulating embedding construction as a machine learning task, where AdaBoost is used to combine simple, 1D embeddings into a multidimensional embedding…
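As a rough illustration of the pipeline the abstract describes, the sketch below (Python, with a hypothetical database, reference objects, weights, and a toy edit-distance measure; in BoostMap these choices are learned with AdaBoost rather than fixed by hand) builds simple 1D reference-object embeddings of the form F_r(x) = d(x, r) and ranks database objects by a weighted Manhattan distance in the embedded space. It is a minimal sketch under those assumptions, not the authors' implementation.

# Minimal sketch of the embed-then-filter idea from the abstract (illustrative only).
# Assumptions: a toy non-Euclidean measure (Levenshtein edit distance on strings),
# hand-picked reference objects, and hand-picked weights; in BoostMap the
# 1D embeddings and their weights are learned with AdaBoost.

def edit_distance(a, b):
    # Dynamic-programming Levenshtein distance, standing in for the expensive measure d().
    prev = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        cur = [i] + [0] * len(b)
        for j in range(1, len(b) + 1):
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1,
                         prev[j - 1] + (a[i - 1] != b[j - 1]))
        prev = cur
    return prev[-1]

def embed(x, references):
    # One simple kind of 1D embedding: F_r(x) = d(x, r) for a reference object r.
    # Concatenating several of them maps x to a vector in a Euclidean space.
    return [edit_distance(x, r) for r in references]

def weighted_l1(u, v, weights):
    # Weighted Manhattan distance used for fast comparisons in the embedded space.
    return sum(w * abs(a - b) for w, a, b in zip(weights, u, v))

# Hypothetical database, reference objects, and weights (illustrative values).
database = ["kitten", "sitting", "mitten", "bitten", "setting"]
references = ["kitten", "setting"]
weights = [1.0, 0.5]
db_vectors = [embed(x, references) for x in database]

query = "fitting"
query_vector = embed(query, references)
ranked = sorted(zip(database, db_vectors),
                key=lambda pair: weighted_l1(query_vector, pair[1], weights))
print([name for name, _ in ranked])

In a retrieval setting, the expensive distance measure would then be applied only to the top-ranked candidates returned by this cheap embedded-space filter.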

Cited by 3 publications (6 citation statements)
References 24 publications
“…The data are ordered by increasing differences of distance. It seems to be an efficient method for obtaining a ranking of all objects in approximate order of similarity to the reference object. For the further statistical evaluation, the data of these 200 preselected objects are taken.…”
Section: Results
confidence: 99%
“…For BoostMap, all of the training objects were used as candidates for creating the 1D embeddings. We used 200,000 triplets to train the BoostMap embedding, as suggested in [1]. The number of BoostMap iterations performed was 250.…”
Section: Comparison With Other Methods
confidence: 99%
“…First, we briefly review the complexity of BoostMap and RCMap. For BoostMap, each iteration has a cost of O(Ct), where C is the number of candidate objects used for creating 1-dimensional embeddings, and t is the number of triples [1]. With i iterations, the total cost of embedding is O(Cti).…”
Section: Running Time
confidence: 99%
“…In [1] we provide a more extensive discussion of BoostMap, and we explore some extensions that can improve embedding accuracy and efficiency.…”
Section: Discussion and Future Work
confidence: 99%