The 2006 IEEE International Joint Conference on Neural Network Proceedings
DOI: 10.1109/ijcnn.2006.246740

FPGA Implementation of Support Vector Machines with Pseudo-Logarithmic Number Representation

Abstract: Computations in Support Vector Machines (SVM) involve a large number of vector multiplications. When implementing such architectures on a stand-alone, embedded system, the complexity of the hardware implementation of the multipliers can be a limiting factor. This paper proposes representing the numerical data to be processed by an approximation of the logarithm of the number, thus allowing the substitution of expensive multipliers with simpler adders. Additional circuitry is proposed to translate between sta…
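The full text is not reproduced here, so the paper's exact number format and conversion circuitry are not shown. The following is a minimal software sketch, assuming a Mitchell-style pseudo-logarithm (leading-one position plus a truncated mantissa); the function names and the 8-bit fractional width are illustrative choices, not taken from the paper. It shows how, once operands are converted, each multiplication collapses to a single fixed-point addition, with conversion logic only at the boundaries.

```c
#include <stdint.h>
#include <stdio.h>

#define FRAC_BITS 8  /* fractional bits kept in the pseudo-log mantissa (illustrative) */

/* Position of the leading one bit, i.e. the integer part of log2(x), for x > 0. */
static unsigned lead_one(uint32_t x) {
    unsigned k = 0;
    while (x > 1) { x >>= 1; k++; }
    return k;
}

/* Linear -> pseudo-log: write x = 2^k * (1 + f), 0 <= f < 1, and pack k and a
 * truncated f as (k << FRAC_BITS) | f.  This is Mitchell's approximation
 * log2(x) ~= k + f: only a priority encode and a shift, no lookup table.  x > 0. */
static uint32_t lin2plog(uint32_t x) {
    unsigned  k    = lead_one(x);
    uint32_t  mant = x - (1u << k);                       /* f scaled by 2^k */
    uint32_t  f    = (k >= FRAC_BITS) ? (mant >> (k - FRAC_BITS))
                                      : (mant << (FRAC_BITS - k));
    return (k << FRAC_BITS) | f;
}

/* Pseudo-log -> linear: unpack K + F and approximate the antilog as 2^K * (1 + F). */
static uint32_t plog2lin(uint32_t p) {
    unsigned  k    = p >> FRAC_BITS;
    uint32_t  f    = p & ((1u << FRAC_BITS) - 1);
    uint32_t  mant = (k >= FRAC_BITS) ? (f << (k - FRAC_BITS))
                                      : (f >> (FRAC_BITS - k));
    return (1u << k) + mant;                              /* narrow-width demo only */
}

int main(void) {
    uint32_t a = 37, b = 113;
    /* Multiplication in the pseudo-log domain is a single addition; a carry from
     * the fractional part into k reproduces Mitchell's second case automatically. */
    uint32_t approx = plog2lin(lin2plog(a) + lin2plog(b));
    printf("exact %u, pseudo-log approx %u\n", a * b, approx);
    return 0;
}
```

In hardware the same structure would map to a priority encoder and barrel shifter per conversion and a plain adder per multiply; that trade-off between saved multipliers and per-input conversion overhead is exactly what the citing works below discuss.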

Cited by 6 publications (4 citation statements)
References 11 publications
“…However, low resource consuming implementations of CORDIC algorithms have increased latency [10]. Other works [26], [27] propose that computations are done in the logarithmic number system, where multiplications are replaced with additions, in order to reduce the required processing resources. However, they only consider a single processing module, hence, when adopting a more parallel architecture, to facilitate real-time operation, the additional cost from converting between the decimal number system to the logarithmic one and back again for all inputs increases.…”
Section: Related Work
Mentioning confidence: 99%
“…However, they only consider a single processing module, hence, when adopting a more parallel architecture, to facilitate real-time operation, the additional cost from converting between the decimal number system to the logarithmic one and back again for all inputs increases. Alternatively, a pseudo-logarithmic number system was proposed in [35], however, the overhead for converting between number systems, in order to perform additions, remains. The works in [36], [37], [38] have looked at how the bitwidth precision impacts the classification error, in an effort to find the best trade-off between hardware resources, performance and classification speed.…”
Section: Related Work
Mentioning confidence: 99%
“…The learning processing of the basic LVQ1 in the following equations (4,5) consists of modifying the neuron FVs and adjusting the class boundaries. Suppose x(t) and w c (t) represent sequences of the input vectors and a winner neuron FV in the discrete-time domain, respectively.…”
Section: LVQ SoC for Learning and Recognition
Mentioning confidence: 99%
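The excerpt refers to equations (4) and (5) of the citing paper, which are not reproduced above. For orientation only, the standard LVQ1 update of the winning feature vector w_c(t) has the textbook form below; the citing paper's equations may differ in detail.

```latex
% Standard LVQ1 update for the winning neuron's feature vector (illustrative,
% not necessarily identical to equations (4) and (5) of the citing paper).
w_c(t+1) =
\begin{cases}
  w_c(t) + \alpha(t)\,\bigl(x(t) - w_c(t)\bigr), & \text{if } w_c(t) \text{ and } x(t) \text{ belong to the same class},\\[4pt]
  w_c(t) - \alpha(t)\,\bigl(x(t) - w_c(t)\bigr), & \text{otherwise},
\end{cases}
```

with all non-winning feature vectors left unchanged and alpha(t) a small, decreasing learning rate, so the winner moves toward correctly classified inputs and away from misclassified ones, adjusting the class boundaries.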
“…Artificial neural networks and fuzzy systems are the most popular learning algorithms in hardware implementations [1], [2], [3], [4]. A hardware-friendly learning algorithm is the common requirement of all hardware implementations.…”
Section: Introduction
Mentioning confidence: 99%