2008 Design, Automation and Test in Europe (DATE 2008)
DOI: 10.1109/date.2008.4484865
Scalable Architecture for on-Chip Neural Network Training using Swarm Intelligence

Abstract: This paper presents a novel architecture for on-chip neural network training using particle swarm optimization (PSO). PSO is an evolutionary optimization algorithm with a growing range of applications that has recently been used to train neural networks. The architecture exploits the PSO algorithm to evolve network weights and uses a method called layer partitioning to implement neural networks. In the proposed method, a neural network is partitioned into groups of neurons and the groups are sequentially mapped…
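As a minimal illustration of the training scheme the abstract describes — treating a network's training error as the PSO fitness function and letting particles evolve the flat weight vector — the following sketch is my own software rendering, not the paper's hardware architecture; the network size, swarm parameters, and XOR task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-4-1 network; a flat vector packs both layers' weights and biases.
N_IN, N_HID, N_OUT = 2, 4, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # = 17

def forward(w, x):
    """Run the network on input batch x using flat weight vector w."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:i + N_OUT]
    h = np.tanh(x @ W1 + b1)      # hidden layer, tanh activation
    return np.tanh(h @ W2 + b2)   # output layer

# XOR training set: a classic non-linearly-separable toy problem.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[-1.], [1.], [1.], [-1.]])

def fitness(w):
    """Mean squared training error; PSO minimizes this."""
    return float(np.mean((forward(w, X) - Y) ** 2))

def pso_train(n_particles=30, iters=300, inertia=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(-1, 1, (n_particles, DIM))  # positions = weight vectors
    vel = np.zeros((n_particles, DIM))
    pbest = pos.copy()
    pbest_err = np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_err)].copy()
    gbest_err = pbest_err.min()
    for _ in range(iters):
        r1 = rng.random((n_particles, DIM))
        r2 = rng.random((n_particles, DIM))
        # Standard PSO velocity update: inertia + cognitive + social terms.
        vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        for k in range(n_particles):
            e = fitness(pos[k])
            if e < pbest_err[k]:
                pbest_err[k] = e
                pbest[k] = pos[k].copy()
                if e < gbest_err:
                    gbest_err = e
                    gbest = pos[k].copy()
    return gbest, gbest_err

best_w, best_err = pso_train()
print(f"final training MSE: {best_err:.4f}")
```

Because PSO needs only fitness evaluations (forward passes), not gradients, each particle's evaluation is independent — which is what makes the scheme attractive for parallel on-chip implementation.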

Cited by 11 publications (11 citation statements)
References 15 publications
“…The studies related to the implementation of ANN training and/or a trained ANN on FPGA are presented in the literature. Ferrer, Gonzalez, Fleitas, Acle and Canetti [9], Savich, Moussa and Areibi [10], and Farmahini-Farahani, Fakhraie and Safari [11] have used the fixed-point number format at various bit lengths. Nedjah, Silva, Mourelle and Silva [12], Çavuşlu, Karakuzu and Şahin [13], Çavuşlu, Karakuzu, Şahin and Yakut [14], and Çavuşlu, Karakuzu and Karakaya [15] have used the floating-point number format at various bit lengths.…”
Section: Introduction
confidence: 99%
“…Nedjah, Silva, Mourelle and Silva [12], Won [16], Çavuşlu, Karakuzu and Şahin [13], Çavuşlu, Karakuzu, Şahin and Yakut [14], Çavuşlu, Karakuzu and Karakaya [15], and Savich, Moussa and Areibi [10] have used logarithmic sigmoid approximations as the activation function. Ferrer, Gonzalez, Fleitas, Acle and Canetti [9], Farmahini-Farahani, Fakhraie and Safari [11], and Çavuşlu, Karakuzu and Karakaya [15] have used hyperbolic tangent approximations as the activation function.…”
Section: Introduction
confidence: 99%
“…al. [22] implemented FPGA-based ANN training using the PSO algorithm. In that application, the fixed-point number format is used and a look-up table is used for the hyperbolic tangent activation function.…”
Section: Introduction
confidence: 99%
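The citation statement above mentions the combination of a fixed-point number format with a look-up table (LUT) for the hyperbolic tangent — a common hardware idiom, since evaluating tanh directly is expensive in logic. The sketch below shows the general technique in software; the Q4.12 format, 256-entry table, and saturation point are illustrative assumptions, not values taken from the cited work.

```python
import math

# Illustrative Q4.12 fixed point: 4 integer bits, 12 fractional bits.
FRAC_BITS = 12
SCALE = 1 << FRAC_BITS      # 4096; value x is stored as round(x * SCALE)
LUT_BITS = 8                # 256-entry table
X_MAX = 4.0                 # tanh is essentially saturated beyond |x| = 4

# Precompute tanh over [0, X_MAX); odd symmetry handles negative inputs.
LUT = [round(math.tanh(i * X_MAX / (1 << LUT_BITS)) * SCALE)
       for i in range(1 << LUT_BITS)]

def fixed_tanh(x_fx):
    """tanh of a Q4.12 fixed-point input, returning a Q4.12 result."""
    sign = -1 if x_fx < 0 else 1
    mag = abs(x_fx)
    if mag >= int(X_MAX * SCALE):
        return sign * LUT[-1]                       # saturate near +/-1
    idx = mag * (1 << LUT_BITS) // int(X_MAX * SCALE)  # map magnitude to table index
    return sign * LUT[idx]

# Compare against floating-point tanh at x = 1.0.
x = 1.0
approx = fixed_tanh(round(x * SCALE)) / SCALE
print(abs(approx - math.tanh(x)))  # small quantization/table error
```

In hardware, the division by `X_MAX * SCALE` reduces to a bit shift when the table span and scale are powers of two, so the whole activation costs one memory read plus a sign fix.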