2019
DOI: 10.48550/arxiv.1907.09916
Preprint

A Hardware-Efficient ADMM-Based SVM Training Algorithm for Edge Computing

Shuo-An Huang,
Chia-Hsiang Yang

Abstract: This work demonstrates a hardware-efficient support vector machine (SVM) training algorithm via the alternating direction method of multipliers (ADMM) optimizer. Low-rank approximation is exploited to reduce the dimension of the kernel matrix by employing the Nyström method. Verified on four datasets, the proposed ADMM-based training algorithm with rank approximation reduces the matrix dimension by 32× with only a 2% drop in inference accuracy. Compared to the conventional sequential minimal optimization (SMO) algor…

Cited by 2 publications (2 citation statements) | References 18 publications

“…Indeed, in this framework, approximating the kernel matrix with an HSS structure (h fixed) results in a highly efficient optimization phase for a fixed value of C (see Section 3.3). It is important to note, moreover, that the computational footprint related to the kernel matrix approximation phase is fully justified by the fact that the same approximation can be reused for training the model with different values of C; this feature makes our proposal particularly attractive when a fine grid is used for the tuning of the penalization parameter C. It is important to note, at this stage, that also the works [22,43] analyse the use of ADMM for SVMs: in [43] ADMM has been used to solve linear SVMs with feature selection whereas in [22] a hardware-efficient nonlinear SVM training algorithm has been presented in which the Nyström approximation is exploited to reduce the dimension of the kernel matrices. Both works represent and use, somehow, different frameworks and techniques from those presented here.…”
Section: Contribution
confidence: 99%
“…In [40], the authors proposed an alternating direction method of multipliers (ADMM) optimizer in order to reduce the dimensions of the kernel matrix using the Nyström technique. This ADMM algorithm reduces the dimensions by 32 times with a 2% decrease in accuracy.…”
confidence: 99%