2010
DOI: 10.1007/978-3-642-15992-3_30

On the Design of a Hardware-Software Architecture for Acceleration of SVM’s Training Phase

Abstract: Support Vector Machines (SVMs) are a family of machine-learning techniques that have shown remarkable results in many areas. Since SVM training scales quadratically (or worse) with data size, it is worth exploring novel implementation approaches to speed up the execution of this type of algorithm. In this paper, a hardware-software architecture to accelerate the SVM training phase is proposed. The algorithm selected to implement the architecture is the Sequential Minimal Optimi…
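The quadratic scaling the abstract mentions comes from the pairwise kernel evaluations over the training set; a minimal sketch (illustrative names, not from the paper) of the dot-product kernel that dominates this cost:

```python
# Sketch of why SVM training scales quadratically with data size:
# building the kernel matrix requires one dot product per pair of
# samples, i.e. O(n^2 * d) work for n samples of dimension d.
# All names here are illustrative, not from the paper.

def dot(x, y):
    """Dot product -- the inner loop typically offloaded to hardware."""
    return sum(a * b for a, b in zip(x, y))

def linear_kernel_matrix(samples):
    """n x n Gram matrix: one dot product per sample pair."""
    n = len(samples)
    return [[dot(samples[i], samples[j]) for j in range(n)]
            for i in range(n)]

X = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]
K = linear_kernel_matrix(X)
# K[0][2] is dot([1,0], [1,1]) == 1.0
```

Doubling the number of samples quadruples the number of dot products, which is why the cited works push exactly this computation into hardware.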

Cited by 4 publications (5 citation statements)
References 9 publications
“…Besides SMO, there are also Gilbert's algorithm [24] and the Least Squares Support Vector Machine (LS-SVM) [25] for training. Works that target training either implement the whole system on-chip or offload it to a co-processor [2,14,15,16] to accelerate the training process. The task targeted for hardware implementation is the kernel, as it is a compute-intensive task that benefits from parallelization.…”
Section: Related Work
confidence: 99%
“…These designs cannot easily be reused for other applications, as they are optimized for specific applications [8]. Works that target acceleration of SVM with a co-processor unit [2,14,15,16] tend to focus on the kernel, due to its compute-intensive nature and its innate suitability for parallelization.…”
Section: Introduction
confidence: 99%
“…Martinez et al. [17] designed a heterogeneous architecture to accelerate the SVM training phase. To reduce the dot-product computation time, these operations were performed by the XtremeDSP Virtex-IV hardware coprocessor, whereas the control hierarchy of the SMO algorithm was implemented on a GPP.…”
Section: A. FPGA: Hardware Accelerator
confidence: 99%
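The hardware/software split described above — SMO control logic on the GPP, dot products offloaded to the coprocessor — can be sketched as follows. This is a hypothetical illustration of one SMO two-multiplier update, not the paper's actual design; `coproc_dot` stands in for the hardware dot-product unit.

```python
# Hypothetical sketch of the GPP/coprocessor partition: the SMO
# control flow runs in software, while every dot product (the hot
# operation) is funneled through one function that a real system
# would map onto the hardware unit.

def coproc_dot(x, y):
    """Stand-in for the hardware dot-product coprocessor call."""
    return sum(a * b for a, b in zip(x, y))

def smo_pair_update(x1, y1, a1, x2, y2, a2, err1, err2, C=1.0):
    """One SMO step: analytically optimize two Lagrange multipliers.

    err1/err2 are the prediction errors for the two samples;
    C is the box constraint. Returns the updated multipliers.
    """
    k11 = coproc_dot(x1, x1)          # three kernel values per step,
    k22 = coproc_dot(x2, x2)          # all computed "in hardware"
    k12 = coproc_dot(x1, x2)
    eta = k11 + k22 - 2.0 * k12       # curvature along the constraint line
    if eta <= 0:
        return a1, a2                 # skip degenerate pair
    a2_new = a2 + y2 * (err1 - err2) / eta
    a2_new = max(0.0, min(C, a2_new))               # clip to [0, C]
    a1_new = a1 + y1 * y2 * (a2 - a2_new)           # preserve sum constraint
    return a1_new, a2_new

a1, a2 = smo_pair_update([1.0, 0.0], 1.0, 0.0,
                         [0.0, 1.0], -1.0, 0.0,
                         err1=-1.0, err2=1.0)
```

The design point the cited works share is visible here: the control flow is branchy and sequential (a poor fit for hardware), while the dot products are regular and data-parallel, so only the latter are moved to the coprocessor.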
“…Besides SMO, there are also Gilbert's algorithm [20] and LS-SVM [21] for training. Works that target training either implement the whole system on-chip or offload it to a co-processor [1], [10]–[12] to accelerate the training process. The task targeted for hardware implementation is the kernel, as it is a compute-intensive task that benefits from parallelization.…”
Section: Kernel Function
confidence: 99%
“…Works that target acceleration of SVM with a co-processor unit [1], [10]–[12] tend to focus on the kernel, due to its compute-intensive nature and its innate suitability for parallelization. Prior work by Kane et al. [13] implemented a generic SVM classification architecture that was tested on a wide variety of datasets.…”
Section: Introduction
confidence: 99%