Foundations of Large-Scale Multimedia Information Management and Retrieval 2011
DOI: 10.1007/978-3-642-20429-6_10
PSVM: Parallelizing Support Vector Machines on Distributed Computers

Cited by 128 publications (87 citation statements)
References 9 publications
“…First of all, the number of training examples was large enough to create a problem for our computational resources. The scaling of SVM to large data sets is indeed an active research area [2,7,18,19]. We turned our attention to a simple approach proposed by V. Vapnik et al. in [11], called Cascade SVM.…”
Section: Table
confidence: 99%
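The Cascade SVM referenced in this excerpt partitions the training set, trains an SVM on each partition, and keeps only the resulting support vectors for retraining. Below is a minimal single-pass sketch of that idea; scikit-learn's SVC, the 4-way split, and the RBF kernel are illustrative assumptions, and the full method in [11] merges the support-vector sets pairwise in a binary-tree layout, possibly looping until convergence.

```python
# A minimal single-pass sketch of the Cascade SVM idea (illustrative, not the
# exact procedure of [11]): train per-partition SVMs, keep each partition's
# support vectors, and retrain a final SVM on their union.
import numpy as np
from sklearn.svm import SVC

def cascade_svm_one_pass(X, y, n_parts=4, C=1.0):
    parts = np.array_split(np.random.default_rng(0).permutation(len(X)), n_parts)
    sv_idx = []
    for part in parts:
        clf = SVC(kernel="rbf", C=C).fit(X[part], y[part])
        sv_idx.append(part[clf.support_])        # global indices of this partition's SVs
    keep = np.concatenate(sv_idx)
    return SVC(kernel="rbf", C=C).fit(X[keep], y[keep])  # retrain on merged SVs
```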
“…Chang et al. [23] improved the scalability of SVMs using a parallel SVM algorithm (PSVM), which reduces memory use by performing a row-based, approximate matrix factorization that loads only essential data onto each machine for parallel computation. PSVM reduces the memory requirement from O(n²) to O(np/m) and improves the computation time to O(np²/m), where n denotes the number of training instances, p the reduced matrix dimension after factorization (p is significantly smaller than n), and m the number of machines.…”
Section: Parallelization
confidence: 99%
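The memory figures in this excerpt come from replacing the full n×n kernel matrix with a low-rank n×p factor whose rows can be distributed over m machines. A minimal, single-machine sketch of the underlying idea, a greedy pivoted incomplete Cholesky factorization with K ≈ H Hᵀ, is shown below; the RBF kernel and the pivot rule are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch (assumed details): approximate the kernel matrix K with a low-rank
# factor H (n x p) so that K ~= H @ H.T. Storing H needs O(n*p) memory rather
# than O(n^2); splitting H's rows across m machines gives the O(np/m) figure
# cited above.
import numpy as np

def rbf_kernel_column(X, j, gamma=1.0):
    """One column of the RBF kernel matrix, computed on demand."""
    d = np.sum((X - X[j]) ** 2, axis=1)
    return np.exp(-gamma * d)

def incomplete_cholesky(X, p, gamma=1.0, tol=1e-12):
    """Greedy pivoted incomplete Cholesky: returns H with K ~= H @ H.T."""
    n = X.shape[0]
    H = np.zeros((n, p))
    diag = np.ones(n)                  # diagonal of an RBF kernel is all ones
    for k in range(p):
        j = int(np.argmax(diag))       # pivot = largest remaining residual
        if diag[j] <= tol:
            return H[:, :k]
        col = rbf_kernel_column(X, j, gamma) - H @ H[j]
        H[:, k] = col / np.sqrt(diag[j])
        diag -= H[:, k] ** 2
    return H

if __name__ == "__main__":
    X = np.random.default_rng(0).normal(size=(500, 10))
    H = incomplete_cholesky(X, p=50)
    print("factor shape:", H.shape)    # (500, 50): O(n*p) storage, not O(n^2)
```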
“…Very high efficiency and competitive accuracy have been achieved by the parallel implementation. PSVM, proposed in [6], is based on an interior point solver. It approximates the kernel matrix by incomplete Cholesky factorization.…”
Section: Related Work
confidence: 99%
“…The training of an SVM is essentially a quadratic optimization problem that is both time- and memory-intensive, making it a challenge to apply SVM to large-scale problems. Several optimization or heuristic methods have been proposed to accelerate training and reduce memory usage, such as shrinking, chunking [4], kernel caching [5], and approximation of the kernel matrix [6]. In addition, certain scalable solvers can be used, such as Sequential Minimal Optimization (SMO) [7], mixture SVMs [8], and the primal estimated sub-gradient solver [9].…”
confidence: 99%
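Of the scalable solvers listed in this excerpt, the primal estimated sub-gradient solver (Pegasos) [9] is the simplest to illustrate: it minimizes the regularized hinge loss directly in the primal with stochastic sub-gradient steps. The sketch below is for a linear SVM; the hyper-parameters, single-example updates, and omission of the optional projection step are simplifying assumptions.

```python
# Minimal Pegasos-style sketch (assumed hyper-parameters, no projection step):
# stochastic sub-gradient descent on lam/2 * ||w||^2 + hinge loss.
import numpy as np

def pegasos_train(X, y, lam=0.01, T=10_000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, T + 1):
        i = rng.integers(n)
        eta = 1.0 / (lam * t)                 # decreasing step size
        if y[i] * (X[i] @ w) < 1.0:           # margin violated: hinge term is active
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                                 # only the regularizer contributes
            w = (1 - eta * lam) * w
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 5))
    y = np.sign(X[:, 0] + 0.5 * X[:, 1])      # toy linearly separable labels
    w = pegasos_train(X, y)
    print("training accuracy:", np.mean(np.sign(X @ w) == y))
```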