Systolic array implementations of neural nets on the MasPar MP-1 massively parallel processor
1990 IJCNN International Joint Conference on Neural Networks
DOI: 10.1109/ijcnn.1990.137711
Cited by 17 publications (5 citation statements) | References 7 publications
“…Early studies [5][6] mainly focused on the design of parallel neural networks and their implementation. Reference [5] realizes a BP network parallelized over nodes, and reference [6] realizes a parallel BP algorithm that combines training-set (data) parallelism with node parallelism.…”
Section: Introduction (mentioning; confidence: 99%)
“…Reference [5] realizes a BP network parallelized over nodes, and reference [6] realizes a parallel BP algorithm combining training-set (data) parallelism with node parallelism. Neither paper considers the problem of dynamic load balancing of communication.…”
Section: Introduction (mentioning; confidence: 99%)
“…Two-dimensional systolic configurations to implement backpropagation ANNs have been adopted by researchers in the past few years [3, 5, 9, …]. These attempts exploit the inherent parallelism of ANNs, taking advantage of the capability for pipelined execution of multiple training patterns.…”
Section: Introduction (mentioning; confidence: 99%)
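The pipelining claim can be illustrated with a toy cycle-level simulation. This is a sketch under assumed details, a weight-stationary two-dimensional array with the function name and sizes invented here, not drawn from the surveyed designs; it shows successive training patterns entering the array one cycle apart instead of being processed serially.

```python
# Toy simulation of pipelined pattern flow through a weight-stationary
# 2D systolic array: PE (i, j) holds W[i, j]; pattern p reaches PE row i
# at cycle p + i, so patterns overlap inside the array.
import numpy as np

def systolic_matmul(X, W):
    """Compute X @ W by streaming patterns (rows of X) through the array."""
    n_pat, n_in = X.shape
    assert W.shape[0] == n_in
    out = np.zeros((n_pat, W.shape[1]))
    for cycle in range(n_pat + n_in - 1):   # pipe fills, streams, drains
        for i in range(n_in):               # PE rows (input index)
            p = cycle - i                   # pattern currently at row i
            if 0 <= p < n_pat:
                out[p, :] += X[p, i] * W[i, :]
    return out

X = np.arange(12, dtype=float).reshape(4, 3)
W = np.ones((3, 2))
assert np.allclose(systolic_matmul(X, W), X @ W)
```

In this toy case the array finishes all four patterns in n_pat + n_in - 1 = 6 cycles rather than 4 x 3 = 12, which is the throughput gain the citing paper attributes to pipelined execution.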
“…At Carnegie Mellon University, the back-propagation learning algorithm has been performed on a systolic array computer [11]. Kung and Hwang proposed several implementations of neural network models on a systolic array [10], which was further mapped onto a parallel SIMD machine (the MasPar MP-1) in [2]. Steck et al. also presented an implementation of the recursive least squares learning algorithm [13] on an Intel iPSC/2, a cubically-connected machine.…”
(mentioning; confidence: 99%)
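A common way such systolic schemes map onto a ring of SIMD processing elements is to keep one weight column per PE and rotate the activation vector around the ring. The sketch below is a generic illustration of that ring-systolic matrix-vector product, with invented names; it is not the actual code of [2] or [10].

```python
# Generic ring-systolic matrix-vector product: PE j holds column j of W
# and one activation; activations rotate n steps while each PE accumulates.
import numpy as np

def ring_matvec(W, x):
    n = len(x)                  # one PE per neuron, n PEs in a ring
    acc = np.zeros(n)           # each PE's partial output sum
    vals = x.copy()             # activation each PE currently holds
    idx = np.arange(n)          # index of that activation
    for _ in range(n):          # n rotation steps
        acc += W[idx, np.arange(n)] * vals  # PE j adds W[idx[j], j] * vals[j]
        vals = np.roll(vals, 1)             # pass activations to the neighbor
        idx = np.roll(idx, 1)
    return acc                  # equals x @ W after a full rotation

rng = np.random.default_rng(1)
W = rng.standard_normal((5, 5))
x = rng.standard_normal(5)
assert np.allclose(ring_matvec(W, x), x @ W)
```

Each step costs one multiply-accumulate plus one nearest-neighbor shift per PE, which suits SIMD machines like the MP-1, where all PEs execute the same instruction and communicate with fixed neighbors.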