2017
DOI: 10.1016/j.neucom.2016.09.118

Hardware architecture for large parallel array of Random Feature Extractors applied to image recognition

Abstract: We demonstrate a low-power and compact hardware implementation of a Random Feature Extractor (RFE) core. Because complex tasks like image recognition require a large set of features, we show how a weight-reuse technique can virtually expand the random features available from the RFE core. Further, we show how to avoid the computation wasted on propagating "incognizant" or redundant random features. As a proof of concept, we validated our approach by using our RFE core as the first stage of an Extreme Learning Machine…
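
The abstract names three ingredients of the pipeline: a fixed random projection (the RFE core) virtually expanded by weight reuse, pruning of uninformative features, and a trained linear ELM readout. The NumPy sketch below shows one plausible way these pieces fit together; the tanh nonlinearity, the variance-based pruning criterion, the roll-based weight reuse, and all dimensions are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def reuse_weights(W_phys, expansion):
    # "Weight reuse": derive extra virtual projections by cyclically
    # permuting the rows of a small physical weight matrix
    # (an illustrative stand-in for the paper's scheme).
    return np.hstack([np.roll(W_phys, k, axis=0) for k in range(expansion)])

def random_features(X, W, b):
    # Fixed random projection plus nonlinearity; W is sampled once
    # and never trained, mirroring a hardware RFE core.
    return np.tanh(X @ W + b)

def prune_incognizant(H, threshold=1e-3):
    # Drop features whose response barely varies across inputs:
    # propagating them costs computation but adds no information.
    keep = H.var(axis=0) > threshold
    return H[:, keep]

def elm_readout(H, Y, reg=1e-2):
    # Only the linear readout is trained, via ridge-regularized
    # least squares (the standard ELM closed form).
    return np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ Y)

# Toy run: 200 random 64-pixel "images", 10 classes.
X = rng.standard_normal((200, 64))
Y = np.eye(10)[rng.integers(0, 10, 200)]
W = reuse_weights(rng.standard_normal((64, 128)), expansion=4)  # 512 virtual features
b = rng.uniform(-1.0, 1.0, W.shape[1])
H = prune_incognizant(random_features(X, W, b))
beta = elm_readout(H, Y)
print((H @ beta).argmax(axis=1)[:10])
```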

Cited by 20 publications (13 citation statements)
References 32 publications
“…The related design approaches typically aimed at efficient digital implementations of RBNs on configurable architectures. A very large-scale integration (VLSI) architecture was the main target in [3,6]; the approach presented in [32] envisioned analog implementations and combined a tri-state activation function with an offline pruning procedure to limit the predictor complexity. The models proposed in [9,10,37,50] targeted FPGA implementations of the learning phase, either online or in batch mode.…”
Section: Comparison With Related Work (mentioning, confidence: 99%)

“…The models proposed in [9,10,37,50] targeted FPGA implementations of the learning phase, either online or in batch mode. Conversely, Decherchi et al. [7] and Ragusa et al. [35] proposed a minimal implementation of the forward phase of RBNs, while [32,49] introduced an effective scheme to reduce the memory requirements of the eventual predictors.…”
Section: Comparison With Related Work (mentioning, confidence: 99%)
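
One widely used way to shrink the memory footprint of a random-weights predictor, in the spirit of the memory-reduction schemes mentioned above, is to regenerate the fixed random weights from a PRNG seed at inference time instead of storing them. Whether [32,49] use exactly this trick is not stated in the excerpt; the sketch below is only a plausible illustration, and all names and dimensions are assumed.

```python
import numpy as np

def forward_streamed(x, beta, seed=0):
    # Forward pass of a random-weights network that never stores the
    # hidden weight matrix: each hidden unit's weights are regenerated
    # from a seeded PRNG, cutting hidden-layer memory from
    # O(n_hidden * n_in) down to the size of a seed, at the price of
    # recomputing the pseudo-random stream on every inference.
    rng = np.random.default_rng(seed)
    n_hidden, n_out = beta.shape
    y = np.zeros(n_out)
    for j in range(n_hidden):
        w = rng.standard_normal(x.shape[0])   # weights of unit j, on the fly
        y += np.tanh(w @ x) * beta[j]
    return y

# Toy usage: 64 inputs, 256 hidden units, 10 outputs.
beta = np.random.default_rng(1).standard_normal((256, 10)) * 0.05
print(forward_streamed(np.ones(64), beta))
```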
“…In [1], [18] the target was the implementation of the predictor on resource-constrained devices. Hence, the underlying mapping strategy was designed to fulfill specific constraints on the admissible activation function, i.e., respectively, a hard-limiter function and a tri-state function.…”
Section: Related Work (mentioning, confidence: 99%)
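
The hard-limiter and tri-state activations mentioned above are attractive in hardware because they reduce a neuron's output to two or three levels. The definitions below are the common textbook forms; the threshold value and the exact output levels used in the cited designs are assumptions.

```python
import numpy as np

def hard_limiter(x):
    # Binary threshold unit: only the sign of the pre-activation
    # survives, so each neuron needs little more than a comparator.
    return np.where(x >= 0.0, 1.0, -1.0)

def tri_state(x, theta=0.5):
    # Three-level activation in {-1, 0, +1}: inputs inside the dead
    # zone [-theta, theta] emit exactly 0, so every downstream
    # multiply-accumulate fed by such a unit can be skipped.
    return np.where(x > theta, 1.0, np.where(x < -theta, -1.0, 0.0))

x = np.linspace(-1.5, 1.5, 7)
print(hard_limiter(x))   # [-1. -1. -1.  1.  1.  1.  1.]
print(tri_state(x))      # [-1. -1.  0.  0.  0.  1.  1.]
```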
“…Hence, it is best if healthy models are learnt for each machine separately, in an online manner, on the deployed hardware. The choice of ELM as the classifier is motivated both by its fast convergence and by the availability of power-efficient hardware [38,39] for deployment.…”
Section: Stack Of Artificial Neural Network (mentioning, confidence: 99%)
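
An online-trainable ELM of the kind this citation relies on is commonly realized with the OS-ELM recursive least-squares update, which refines the readout one sample at a time without storing past data. The sketch below follows that standard formulation; the layer sizes, tanh nonlinearity, and class interface are illustrative assumptions, not the cited implementation.

```python
import numpy as np

class OnlineELM:
    # OS-ELM-style classifier: fixed random hidden layer, readout
    # updated per sample with recursive least squares, so no batch
    # of past data needs to be kept on the device.
    def __init__(self, n_in, n_hidden, n_out, reg=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_in, n_hidden))
        self.b = rng.uniform(-1.0, 1.0, n_hidden)
        self.beta = np.zeros((n_hidden, n_out))
        self.P = np.eye(n_hidden) / reg      # inverse correlation matrix

    def _hidden(self, x):
        return np.tanh(x @ self.W + self.b)

    def update(self, x, y):
        h = self._hidden(x)
        Ph = self.P @ h
        k = Ph / (1.0 + h @ Ph)              # RLS gain (= P_new @ h)
        self.beta += np.outer(k, y - h @ self.beta)
        self.P -= np.outer(k, Ph)

    def predict(self, x):
        return self._hidden(x) @ self.beta

# Toy stream: learn a per-machine health model sample by sample.
rng = np.random.default_rng(2)
model = OnlineELM(n_in=8, n_hidden=64, n_out=2)
for _ in range(100):
    x = rng.standard_normal(8)
    y = np.array([1.0, 0.0]) if x[0] > 0 else np.array([0.0, 1.0])
    model.update(x, y)
print(model.predict(rng.standard_normal(8)))
```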