TENCON 2017 - 2017 IEEE Region 10 Conference
DOI: 10.1109/tencon.2017.8228163
FPGA implementation of extreme learning machine system for classification

Cited by 17 publications (7 citation statements)
References 11 publications
“…FPGA implementations are a recent entrant into this area owing to their reconfigurability. In [17], [18], an efficient decomposition method is proposed to accelerate computation of the pseudo-inverse of the hidden-layer output matrix. In [19], the properties of random networks and hard-limiter activation functions are exploited to implement ELM on an FPGA.…”
Section: Related Work for ELM Hardware Implementation
confidence: 99%
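The pseudo-inverse step mentioned in the statement above is the core of ELM training: the hidden-layer weights stay random and fixed, and only the output weights are solved in closed form. A minimal NumPy sketch with an illustrative hard-limiter activation follows; the sizes, seed, and threshold are hypothetical, not taken from the cited designs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the cited papers).
n_samples, n_features, n_hidden, n_classes = 100, 4, 20, 3
X = rng.normal(size=(n_samples, n_features))
T = np.eye(n_classes)[rng.integers(0, n_classes, n_samples)]  # one-hot targets

# Random input weights and biases: fixed at initialization, never trained.
W = rng.normal(size=(n_features, n_hidden))
b = rng.normal(size=n_hidden)

# Hard-limiter activation (binary threshold), in the spirit of [19].
H = np.where(X @ W + b > 0.0, 1.0, 0.0)  # hidden-layer output matrix

# The only "training": output weights via the Moore-Penrose pseudo-inverse.
beta = np.linalg.pinv(H) @ T
pred = np.argmax(H @ beta, axis=1)  # class decisions
```

Because the binary activation produces only 0/1 hidden outputs, the matrix products in the forward pass reduce to additions, which is what makes hard-limiter variants attractive for FPGA datapaths.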
“…Several papers on the hardware implementation of ELM [28,29,30,31] have been reported since 2012. However, studies on the hardware implementation of OS-ELM have only begun to appear in the past few years.…”
Section: Hardware Implementation of OS-ELM
confidence: 99%
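OS-ELM, referenced above, extends batch ELM with a recursive least-squares update so that new data chunks refine the output weights without retraining from scratch. A hedged sketch of the standard OS-ELM update equations follows; the network sizes and data here are placeholders, not from the cited implementations.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features, n_hidden, n_outputs = 4, 10, 2  # illustrative sizes

# Random, fixed hidden layer shared by the batch and sequential phases.
W = rng.normal(size=(n_features, n_hidden))
b = rng.normal(size=n_hidden)
hidden = lambda X: np.tanh(X @ W + b)

# Initialization phase: ordinary ELM on a first batch (X0, T0).
X0 = rng.normal(size=(20, n_features))
T0 = rng.normal(size=(20, n_outputs))
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0)  # assumes H0.T @ H0 is invertible
beta = P @ H0.T @ T0

def os_elm_step(P, beta, Xk, Tk):
    """One OS-ELM chunk update (standard recursive least-squares form)."""
    Hk = hidden(Xk)
    K = np.linalg.inv(np.eye(len(Xk)) + Hk @ P @ Hk.T)
    P = P - P @ Hk.T @ K @ Hk @ P
    beta = beta + P @ Hk.T @ (Tk - Hk @ beta)
    return P, beta

# Sequential phase: absorb a new chunk without revisiting old samples.
Xk = rng.normal(size=(15, n_features))
Tk = rng.normal(size=(15, n_outputs))
P, beta = os_elm_step(P, beta, Xk, Tk)
```

After each step, beta matches the batch least-squares solution on all data seen so far, while only the current chunk is kept in memory, which is why OS-ELM is the variant considered for streaming hardware.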
“…A very large-scale integration (VLSI) architecture was the main target in [3,6]; the approach presented in [32] envisioned analog implementations and combined a tri-state activation function with an offline pruning procedure to limit the predictor complexity. The models proposed in [9,10,37,50] targeted FPGA implementations of the learning phase, either online or in batch mode. Conversely, Decherchi et al. [7] and Ragusa et al. [35] proposed a minimal implementation of the forward phase of RBNs, while [32,49] introduced an effective scheme to reduce the memory requirements of the eventual predictors.…”
Section: Comparison with Related Work
confidence: 99%
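As an illustration of the [32]-style ideas mentioned above, the following is a hypothetical sketch of a tri-state activation (outputs restricted to {-1, 0, +1}, so hardware multiplications reduce to adds and skips) together with a simple magnitude-based offline pruning of the output weights. The threshold and the pruning rule are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def tri_state(x, theta=0.5):
    # Tri-state activation: values within [-theta, theta] are silenced to 0,
    # the rest saturate to +/-1. theta is a hypothetical design parameter.
    return np.sign(x) * (np.abs(x) > theta)

def prune(beta, keep=0.5):
    # Offline pruning sketch: keep the fraction `keep` of hidden neurons
    # with the largest output-weight magnitude, zeroing the rest.
    score = np.linalg.norm(beta, axis=1)
    k = max(1, int(keep * len(score)))
    mask = score >= np.sort(score)[::-1][k - 1]
    return beta * mask[:, None]
```

Zeroed rows of beta correspond to hidden neurons that can be removed entirely from the predictor, which is how pruning limits both computation and memory in the deployed model.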
“…First, the proposed designs mostly rely on reconfigurable platforms such as field programmable gate arrays (FPGAs) [9,10,37,50], which may prove quite expensive. By contrast, implementations on micro-controllers or microcomputers have drawn limited attention, in spite of the fact that these devices best fit IoT applications and remarkably shrink the time-to-market of commercial products [1,17].…”
Section: Introduction
confidence: 99%