Proceedings of the British Machine Vision Conference 2012
DOI: 10.5244/c.26.86
Hash-Based Support Vector Machines Approximation for Large Scale Prediction

Abstract: How to train effective classifiers on huge amounts of multimedia data is clearly a major challenge that is attracting more and more research across several communities. Less effort, however, is spent on the counterpart scalability issue: how to apply big trained models efficiently to huge non-annotated media collections? In this paper, we address the problem of speeding up the prediction phase of linear Support Vector Machines via Locality Sensitive Hashing. We propose building efficient hash-based classi…

Cited by 10 publications (1 citation statement)
References 18 publications
“…Recently, efforts have also been made to improve testing efficiency. In [16], the authors speed up classification given a big trained model that may contain millions of classifiers, by hashing both the classifiers and the features into Hamming space and using the Hamming distance to approximate the original classifier. By replacing inner products on high-dimensional floating-point vectors with compact Hamming codes, classification can run 20 to 200 times faster than with the original classifiers, and the size of the classifier can also be significantly reduced.…”
Section: A Scalable Visual Recognition
confidence: 99%
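The idea summarized above — hashing both classifier weights and features into Hamming space so that a Hamming distance stands in for the inner product — can be sketched with sign random projections, the classic LSH family for cosine similarity. This is a minimal illustration, not the paper's exact scheme; the dimensions, bit count, and the stand-in weight vector `w` are all assumptions for the toy example.

```python
import numpy as np

# Toy sketch (assumed setup, not the paper's exact method):
# approximate a linear SVM score w.x via sign-random-projection LSH.
rng = np.random.default_rng(0)
d, b = 512, 256                       # feature dimension, number of hash bits
R = rng.standard_normal((b, d))       # one random projection shared by w and x

def srp_hash(v):
    """b-bit sign-random-projection code of vector v."""
    return R @ v > 0

def approx_cos(code_w, code_x):
    """Estimate cos(angle(w, x)) from the Hamming distance between codes.
    For sign random projections, P[bit differs] = angle / pi,
    so angle is estimated as pi * hamming / b."""
    hamming = np.count_nonzero(code_w != code_x)
    return np.cos(np.pi * hamming / b)

w = rng.standard_normal(d)            # stand-in for a trained SVM weight vector
x = w + 0.1 * rng.standard_normal(d)  # a test feature correlated with w

cos_est = approx_cos(srp_hash(w), srp_hash(x))
# Recover an approximate decision score: w.x = |w| |x| cos(angle)
score_est = np.linalg.norm(w) * np.linalg.norm(x) * cos_est
```

The speed-up reported in the citation comes from replacing the O(d) floating-point dot product per classifier with a b-bit Hamming distance, which modern CPUs compute with XOR and popcount; the compact codes also shrink the stored model.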