2017
DOI: 10.1007/978-3-319-52277-7_26

Efficient Sparse Approximation of Support Vector Machines Solving a Kernel Lasso

Abstract: Performing predictions using a non-linear support vector machine (SVM) can be too expensive in some large-scale scenarios. In the non-linear case, the complexity of storing and using the classifier is determined by the number of support vectors, which is often a significant fraction of the training data. This is a major limitation in applications where the model needs to be evaluated many times to accomplish a task, such as those arising in computer vision and web search ranking. We propose an efficient algorithm…
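The abstract is truncated, but the general recipe it points to (approximating a trained SVM's decision function with a sparse kernel expansion selected through an L1 penalty, i.e., a kernel lasso) can be sketched. The snippet below is a minimal illustration under assumed choices: an RBF kernel, scikit-learn's Lasso as the L1 solver, and the full SVM's decision values as regression targets. It is not the authors' algorithm, and all parameter values are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.linear_model import Lasso
from sklearn.metrics.pairwise import rbf_kernel

# Hypothetical sketch: approximate a trained RBF-SVM's decision function
# f(x) = sum_i alpha_i k(x_i, x) + b with a sparse kernel expansion found
# by an L1-penalized least-squares (lasso) fit.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
svm = SVC(kernel="rbf", gamma=0.1, C=1.0).fit(X, y)
print("support vectors in the full model:", len(svm.support_))

# Regression targets: the full SVM's decision values on the training set.
f = svm.decision_function(X)

# Dictionary: kernel evaluations against every training point.
K = rbf_kernel(X, X, gamma=0.1)

# The L1 penalty drives most expansion coefficients to exactly zero,
# leaving a small set of basis points that stand in for the support vectors.
lasso = Lasso(alpha=0.01, fit_intercept=True, max_iter=10000).fit(K, f)
basis = np.flatnonzero(lasso.coef_)
print("basis points in the sparse approximation:", len(basis))

# Sparse predictor: sign of the approximated decision function; its cost
# scales with the number of retained basis points, not with the SV count.
def sparse_predict(X_new):
    K_new = rbf_kernel(X_new, X[basis], gamma=0.1)
    return np.sign(K_new @ lasso.coef_[basis] + lasso.intercept_)

agreement = np.mean(sparse_predict(X) == np.sign(f))
print(f"agreement with full SVM on training data: {agreement:.3f}")
```

The design point this illustrates is the one the abstract motivates: once the L1 penalty zeroes out most coefficients, prediction cost is governed by the handful of retained basis points rather than by the original, often large, set of support vectors.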

Cited by 2 publications (1 citation statement) | References 10 publications
“…The quantity of SVs determines not only the memory footprint of the learned predictor but also the computational cost of using it. A vast body of literature has been devoted to improving the sparsity of SVMs, e.g., [44], [45], [46], [47], [48]. Fortunately, the binary SVM classifier with 0-1 loss has an innate ability to reduce the number of sample candidates that become SVs, owing to the geometrical and numerical evidence shown in this article.…”
Section: Example 4.2 (Real Data With Outliers) | Mentioning confidence: 99%