Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2006
DOI: 10.1145/1150402.1150429

Training linear SVMs in linear time

Cited by 1,456 publications
(1,150 citation statements)
References 21 publications
“…The implementations of H-M³ and HMSVM were provided by the authors, while the baseline algorithms (SVM and H-SVM) and the versions in which the binary classifiers are not independent (SVM lΔ and H-SVM lΔ) were implemented by slightly modifying Joachims' SVM^perf [20]; this SVM implementation provided an excellent base due to its linear complexity. In all cases, when needed, we set the regularization parameter C = 1 and used a linear kernel.…”
Section: Results (mentioning)
confidence: 99%
“…Compute (k) by formula (11). If (k) = 0, then sort the index set {1, …, l} by the values of (k) in increasing order.…”
Section: Algorithm 2: The First-order Working Set Selection Rule (mentioning)
confidence: 99%
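The excerpt above describes a first-order working set selection rule for a decomposition method: per-index quantities are computed from the current gradient and the indices are ordered by how strongly they violate optimality. A generic illustration of this idea for the dual SVM (not necessarily the exact rule of the cited algorithm; the variable names here are illustrative):

```python
import numpy as np

def select_pair(grad, y, alpha, C):
    """Pick a maximally violating pair (i, j) from the dual SVM gradient.

    Illustrative first-order selection: score each index by d_i = -y_i * grad_i,
    restricted to indices whose alpha can still move in the required direction
    within the box [0, C]. Larger d_i in I_up vs. smaller d_j in I_low means
    a larger first-order violation of the KKT conditions.
    """
    # I_up: alpha_i may increase along direction +y_i
    up = ((y == 1) & (alpha < C)) | ((y == -1) & (alpha > 0))
    # I_low: alpha_i may decrease along direction +y_i
    low = ((y == 1) & (alpha > 0)) | ((y == -1) & (alpha < C))
    d = -y * grad                                   # first-order violation score
    i = int(np.argmax(np.where(up, d, -np.inf)))    # most violating in I_up
    j = int(np.argmin(np.where(low, d, np.inf)))    # most violating in I_low
    return i, j
```

Sorting the full index set by d (as the excerpt suggests) generalizes this pairwise choice to selecting a larger working set at once.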
“…Several researchers have also explored how to train the primal form of (4) and its extended models quickly. The existing algorithms can be broadly divided into two categories: cutting-plane methods [11,5,12,13,25] and subgradient methods [3,17]. For example, in [17], Shalev-Shwartz et al. described and analyzed a simple and effective stochastic subgradient descent algorithm and proved that the number of iterations required to obtain a solution of accuracy ε is O(1/ε).…”
Section: Introduction (mentioning)
confidence: 99%
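The subgradient method referenced above (Pegasos [17]) optimizes the primal SVM objective directly with a decaying step size. A minimal sketch of that style of stochastic subgradient descent, assuming a hypothetical helper name and hinge-loss primal objective λ/2·||w||² + (1/n)·Σ max(0, 1 − yᵢ w·xᵢ):

```python
import numpy as np

def pegasos_train(X, y, lam=0.01, n_iters=1000, seed=0):
    """Pegasos-style stochastic subgradient descent for the linear SVM primal.

    X: (n, d) feature matrix; y: (n,) labels in {-1, +1}.
    Illustrative sketch, not the authors' implementation.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)            # sample one training example
        eta = 1.0 / (lam * t)          # decaying step size 1/(lambda * t)
        if y[i] * X[i].dot(w) < 1:     # hinge loss active: loss subgradient too
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                          # only the regularizer contributes
            w = (1 - eta * lam) * w
    return w
```

The O(1/ε) iteration bound cited in the excerpt refers to this kind of update: each step costs one inner product, independent of n, which is what makes the method attractive for large data sets.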
“…SMO, proposed by Platt [7], is another algorithm that trains SVMs efficiently; it decomposes the overall QP problem into sub-problems and solves the smallest possible optimization problem at every step. Joachims [8] presented a cutting-plane algorithm that trains linear SVMs in linear time for classification tasks. In [9], a fast algorithm for solving linear SVMs with loss functions for data mining tasks on large data sets was developed.…”
Section: Related Work (mentioning)
confidence: 99%
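The cutting-plane algorithm attributed to Joachims [8] repeatedly adds the single most violated constraint of a 1-slack reformulation of the SVM training problem. The key step, finding that constraint, takes only one pass over the data: an example contributes to the cut exactly when its hinge loss is active. A sketch of that step under the 1-slack formulation (function name illustrative; the surrounding QP solver is omitted):

```python
import numpy as np

def most_violated_cutting_plane(w, X, y):
    """Most violated constraint of the 1-slack SVM formulation.

    For binary vector c, the constraints read  w . a(c) >= b(c) - xi  with
    a(c) = (1/n) sum_i c_i y_i x_i  and  b(c) = (1/n) sum_i c_i.
    The most violated choice sets c_i = 1 iff example i's hinge loss is
    active, i.e. y_i * w.x_i < 1. Computable in O(n) time.
    """
    c = (y * X.dot(w)) < 1                                   # active hinge losses
    a = (c[:, None] * y[:, None] * X).sum(axis=0) / len(y)   # constraint normal
    b = c.mean()                                             # constraint offset
    return a, b
```

Because only one such cut is added per iteration and the number of iterations is bounded independently of n, the overall training time scales linearly in the number of examples, which matches the linear-time claim in the title above.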