Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2008
DOI: 10.1145/1401890.1401942

A sequential dual method for large scale multi-class linear SVMs

Abstract: Efficient training of direct multi-class formulations of linear Support Vector Machines is very useful in applications such as text classification with a huge number of examples as well as features. This paper presents a fast dual method for this training. The main idea is to sequentially traverse through the training set and optimize the dual variables associated with one example at a time. The speed of training is enhanced further by shrinking and cooling heuristics. Experiments indicate that our method is much…
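The one-example-at-a-time idea in the abstract is easiest to see in the binary case. The sketch below is not the paper's multi-class algorithm but the analogous binary L1-loss dual coordinate descent of Hsieh et al. (2008, cited in the excerpts below): each pass visits one example, solves its single-variable dual subproblem in closed form, and updates the weight vector incrementally. Function and variable names are illustrative.

```python
import numpy as np

def sequential_dual_svm(X, y, C=1.0, epochs=50, seed=0):
    """Binary L1-loss dual coordinate descent: traverse the training
    set and optimize the dual variable of one example at a time; the
    paper generalizes this scheme to the direct multi-class dual."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    Q = np.einsum("ij,ij->i", X, X)      # diagonal of the Gram matrix
    for _ in range(epochs):
        for i in rng.permutation(n):     # sequential pass, shuffled
            g = y[i] * w @ X[i] - 1.0    # dual gradient w.r.t. alpha_i
            a_new = min(max(alpha[i] - g / Q[i], 0.0), C)
            w += (a_new - alpha[i]) * y[i] * X[i]
            alpha[i] = a_new
    return w

# Toy linearly separable problem, labels in {-1, +1}
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = sequential_dual_svm(X, y)
pred = np.sign(X @ w)
```

Because each update needs only one row of the data and the current weight vector, the cost per pass is linear in the number of nonzero features, which is what makes the approach attractive at the scale the abstract describes.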


Cited by 108 publications (62 citation statements)
References 18 publications
“…6. To emphasize the validity and efficacy of the learned network, we also compare it with another method utilizing a multi-class support vector machine (SVM), with the Crammer and Singer multi-class SVM [21] implementation of [22] in the LIBLINEAR [23] library. The feature vectors for training the SVM are extracted from the VGG-F network of [20], their dimensionality is reduced to 256, and PCA whitening is applied.…”
Section: Superclass Classification (mentioning)
Confidence: 99%
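The preprocessing step quoted above (PCA projection followed by whitening) can be sketched in a few lines of NumPy. `pca_whiten` and its parameters are illustrative names, not from the cited work, which reduces the features to 256 dimensions; `k` is left free here.

```python
import numpy as np

def pca_whiten(X, k, eps=1e-5):
    """Project X onto the top-k principal components and rescale each
    component to unit variance (PCA whitening); eps guards against
    division by near-zero eigenvalues."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues ascending
    top = np.argsort(vals)[::-1][:k]         # indices of largest k
    W = vecs[:, top] / np.sqrt(vals[top] + eps)
    return Xc @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 8))  # correlated features
Z = pca_whiten(X, k=4)
```

After whitening, the retained components are decorrelated with roughly unit variance, which tends to make the subsequent linear SVM's conditioning, and hence its training speed, less sensitive to feature scaling.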
“…Here we extend the sequential dual method (SDM) presented in [8] to derive an efficient sequential update of the LSSVM dual variables. This algorithm uses the gradient information to optimize the dual variables and the weight vector in a sequential manner.…”
Section: B. Computation of the LSSVM Parameters (mentioning)
Confidence: 99%
“…We refer to this new method as linear subclass SVMs (LSSVMs). For the efficient implementation of LSSVMs we exploit the sequential dual method (SDM) described in [8]. Moreover, we introduce a new non-Gaussianity measure for subclass partitioning, and exploit an ECOC framework for combining the binary subclass classifiers.…”
(mentioning)
Confidence: 99%
“…LiblineaR is an R interface to LIBLINEAR, a C/C++ library for large-scale linear classification (Fan et al., 2008). LIBLINEAR not only has good theoretical properties, but also shows promising performance in practice (Fan et al., 2008; Hsieh et al., 2008; Keerthi et al., 2008; Lin et al., 2008). The L2-regularized L2-loss support vector classification model (Fan et al., 2008) in LiblineaR was applied to construct the predictor.…”
Section: Predictor Construction and Evaluation (mentioning)
Confidence: 99%
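The L2-regularized L2-loss (squared-hinge) model quoted above admits the same one-example-at-a-time dual update; following Hsieh et al. (2008), the dual variable is only bounded below at zero and the diagonal gains a 1/(2C) term. The sketch below is illustrative, not the LIBLINEAR implementation itself.

```python
import numpy as np

def l2loss_dual_svm(X, y, C=1.0, epochs=100, seed=0):
    """Dual coordinate descent for L2-regularized L2-loss SVC: same
    sequential per-example scheme, but alpha_i has no upper bound and
    the effective diagonal is Q_ii + 1/(2C)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha, w = np.zeros(n), np.zeros(d)
    D = 1.0 / (2.0 * C)
    Q = np.einsum("ij,ij->i", X, X) + D
    for _ in range(epochs):
        for i in rng.permutation(n):
            g = y[i] * w @ X[i] - 1.0 + D * alpha[i]
            a_new = max(alpha[i] - g / Q[i], 0.0)
            w += (a_new - alpha[i]) * y[i] * X[i]
            alpha[i] = a_new
    return w

X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = l2loss_dual_svm(X, y)
```

Compared with the L1-loss update, dropping the upper bound and adding the ridge term on the diagonal is the only change, which is why LIBLINEAR can offer both losses behind the same solver framework.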