2010
DOI: 10.1007/978-3-642-16687-7_16

A New Algorithm for Training SVMs Using Approximate Minimal Enclosing Balls

Abstract: It has been shown that many kernel methods can be equivalently formulated as minimal enclosing ball (MEB) problems in a certain feature space. Exploiting this reduction, efficient algorithms to scale up Support Vector Machines (SVMs) and other kernel methods have been introduced under the name of Core Vector Machines (CVMs). In this paper, we study a new algorithm to train SVMs based on an instance of the Frank-Wolfe optimization method recently proposed to approximate the solution of the MEB problem.…
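The reduction the abstract describes can be sketched in a few lines of code. Below is a minimal, illustrative Frank-Wolfe iteration for the approximate MEB in kernel feature space, in the spirit of Badoiu-Clarkson/CVM-style algorithms; the function name, the stopping rule, and the 2/(t+2) step size are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def meb_frank_wolfe(K, eps=1e-3, max_iter=1000):
    """Approximate MEB of the points phi(x_i) in feature space, given
    their kernel (Gram) matrix K.

    The center c = sum_i alpha[i] * phi(x_i) is kept implicitly via the
    weights alpha on the unit simplex; each step moves c toward the
    point currently farthest from it.
    """
    n = K.shape[0]
    alpha = np.zeros(n)
    alpha[0] = 1.0                      # start the ball at one point
    diag = np.diag(K).copy()
    for t in range(max_iter):
        Ka = K @ alpha                  # <c, phi(x_j)> for every j
        aKa = alpha @ Ka                # ||c||^2
        dist2 = diag - 2.0 * Ka + aKa   # ||phi(x_j) - c||^2 for every j
        r2 = alpha @ diag - aKa         # dual objective, approximates radius^2
        j = int(np.argmax(dist2))       # FW vertex: the farthest point
        if dist2[j] <= (1.0 + eps) ** 2 * r2:
            break                       # every point inside the (1+eps)-ball
        gamma = 2.0 / (t + 2.0)         # classical diminishing step size
        alpha = (1.0 - gamma) * alpha
        alpha[j] += gamma
    return alpha
```

With a linear kernel K = X @ X.T this computes the ordinary Euclidean MEB of the rows of X; CVM-style methods exploit the fact that, for suitable kernels, SVM training reduces to a ball computation of exactly this form.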


Citation Types: 2 supporting, 14 mentioning, 0 contrasting

Year Published: 2011–2016


Cited by 5 publications (16 citation statements)
References: 8 publications
“…However, we conclude that MFW is more accurate than FW. This last observation stresses the relevance of this work as an extension of the results presented in [10].…”
Section: Statistical Tests (supporting, confidence: 83%)
“…5.3 and 5.4. In Subsection 5.5 we present additional experiments on the set of problems studied in [10]. The statistical significance of the results presented so far is analyzed in section 5.6.…”
Section: Organization Of This Section (mentioning, confidence: 99%)
“…In addition, several variants of the basic procedure have been analyzed, which can improve the convergence rate and practical performance of the basic FW iteration [15,35,26,6]. From a practical point of view, they have emerged as efficient alternatives to traditional methods in several contexts, such as large-scale SVM classification [7,8,35,6] and nuclear norm-regularized matrix recovery [22,42]. In view of these developments, FW algorithms have come to be regarded as a suitable approach to large-scale optimization in various areas of Machine Learning, statistics, bioinformatics and related fields [1,27].…”
Section: Frank-Wolfe Optimization (mentioning, confidence: 99%)
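For reference, the basic FW iteration this excerpt builds on is very short. The sketch below works over the unit simplex (the feasible set of the SVM/MEB duals discussed above); the helper name and the classical 2/(t+2) step size are illustrative assumptions, and the cited variants mainly change how the direction and the step size are chosen.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iter=500):
    """Basic FW iteration over the unit simplex for a smooth convex
    objective with gradient oracle `grad`.

    The linear subproblem min_{s in simplex} <grad(x), s> is solved by
    a single argmin over the gradient coordinates, which is what makes
    FW cheap on this feasible set.
    """
    x = x0.copy()
    for t in range(n_iter):
        g = grad(x)
        j = int(np.argmin(g))       # vertex e_j solving the linear subproblem
        gamma = 2.0 / (t + 2.0)     # classical step size, O(1/t) duality gap
        x = (1.0 - gamma) * x       # x <- (1 - gamma) x + gamma e_j
        x[j] += gamma
    return x

# Example: minimize 0.5 * x^T Q x over the simplex.
# Q = np.eye(5); x = frank_wolfe_simplex(lambda x: Q @ x, np.ones(5) / 5)
```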
“…In Algorithm 2, we compute the coordinates of the gradient using the method of residuals given by equation (7). Due to the randomization, this method becomes very advantageous with respect to the use of the alternative method based on the active covariates, even for very large p. Indeed, if we denote by s the cost of performing a dot product between a predictor z_i and another vector in R^m, the overall cost of picking out the FW vertex in step 1 of our algorithm is O(s|S|).…”
Section: Complexity and Implementation Details (mentioning, confidence: 99%)
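A sketch of the residual-based, randomized vertex selection this excerpt describes, for a least-squares objective: all names here are illustrative, and the uniform sampling scheme is a generic stand-in for the paper's randomization, not its exact procedure.

```python
import numpy as np

def pick_fw_vertex_randomized(Z, residual, sample_size, rng):
    """Select the FW vertex for f(alpha) = 0.5 * ||y - Z alpha||^2 by
    sampling a subset S of the p covariates and computing the gradient
    only there.

    Each gradient coordinate is -z_i . r with r = y - Z alpha, so the
    selection costs O(s|S|), with s the cost of one dot product,
    instead of O(s p) for a full scan.
    """
    p = Z.shape[1]
    S = rng.choice(p, size=min(sample_size, p), replace=False)
    grad_S = -(Z[:, S].T @ residual)          # sampled gradient coordinates
    return int(S[np.argmax(np.abs(grad_S))])  # steepest sampled coordinate
```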
“…The solution of the linear approximation step can be obtained in a fast way by solving a largest eigenvalue problem, as opposed to proximal methods that require a full SVD of the gradient matrix at each iteration, which is prohibitive for large-scale problems. As a motivating example, we consider the SVM problem (5) for the experiments in this paper, not only because of its significance, but also to allow for a comparison with the results obtained in previous research efforts [19,20,11,7,21].…”
Section: Applications To Machine Learning (mentioning, confidence: 99%)
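The "largest eigenvalue problem" mentioned here is the key computational point: on a nuclear-norm ball, the FW linear subproblem only needs the top singular vector pair of the gradient matrix. Below is a minimal sketch via power iteration; the function name and iteration count are illustrative assumptions.

```python
import numpy as np

def fw_linear_step_nuclear(G, n_power_iter=50, rng=None):
    """Linear FW subproblem on the unit nuclear-norm ball: the minimizer
    of <G, S> over ||S||_* <= 1 is -u v^T, with (u, v) the top singular
    vector pair of the gradient matrix G.

    A few power iterations on G^T G recover (u, v) cheaply, avoiding the
    full SVD per iteration that proximal methods would require.
    """
    rng = np.random.default_rng() if rng is None else rng
    v = rng.standard_normal(G.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_power_iter):   # power iteration on G^T G
        v = G.T @ (G @ v)
        v /= np.linalg.norm(v)
    u = G @ v
    u /= np.linalg.norm(u)
    return -np.outer(u, v)          # rank-one update direction
```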