2009
DOI: 10.1007/978-3-642-02568-6_60

High Speed k-Winner-Take-ALL Competitive Learning in Reconfigurable Hardware

Cited by 4 publications (16 citation statements)
References 4 publications
“…Moreover, the architecture uses the subspace search and bitplane reduction techniques to further reduce the area costs. It can then be observed from Table 3 (the area costs of the proposed architecture and the architecture in [24] for different numbers of neurons N) that the architecture in [24] has a lower consumption of LEs and embedded multipliers when the number of neurons N is large. Consequently, the SOPC based on the architecture in [24] also consumes lower hardware resources, as shown in Table 4.…”
Section: Images (mentioning)
Confidence: 97%
“…The architecture presented in [24] is a hardware implementation of PDS in wavelet domain. Because of the employment of PDS, there is only one squared distance calculation unit in the circuit.…”
Section: Images (mentioning)
Confidence: 99%
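The quote above refers to PDS (partial distance search), a standard early-rejection technique for nearest-codeword search in vector quantization. As a rough illustration of why a single, sequentially reused squared-distance unit suffices, here is a minimal software sketch in plain Python; it is not taken from either paper, and the function and variable names are purely illustrative.

```python
import numpy as np

def pds_nearest_codeword(x, codebook):
    """Nearest-codeword search with partial distance search (PDS).

    The squared distance to each codeword is accumulated one dimension
    at a time, and accumulation is aborted as soon as the partial sum
    exceeds the best distance found so far. Because distances are built
    up term by term, a hardware realization can time-share one
    squared-distance calculation unit across all dimensions and codewords.
    """
    best_index, best_dist = -1, np.inf
    for i, c in enumerate(codebook):
        partial = 0.0
        for xj, cj in zip(x, c):
            partial += (xj - cj) ** 2
            if partial >= best_dist:      # early rejection: stop accumulating
                break
        else:                             # loop ran to completion: full distance known
            best_dist, best_index = partial, i
    return best_index, best_dist

# Toy usage: 16 codewords of dimension 8
codebook = np.random.rand(16, 8)
x = np.random.rand(8)
idx, dist = pds_nearest_codeword(x, codebook)
```

The early-rejection test is what trades extra comparisons for fewer multiplications, which is the property the citing paper associates with the single squared-distance unit in the circuit.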