International Neural Network Conference 1990
DOI: 10.1007/978-94-009-0643-3_38
Neural Network Simulation on the MasPar MP-1 Massively Parallel Processor

Cited by 17 publications (5 citation statements); References 0 publications
“…We have investigated the trade-offs of different approaches to parallelization of neural networks, as given in [16], [6], [2] and [24], and decided on implementations which combine unit parallelism with training vector parallelism (the first two implementations) and link weight parallelism with training vector parallelism, respectively.…”
Section: The Architecture of the MP-1216 (mentioning, confidence: 99%)
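The two parallelization axes named in the statement above can be illustrated with a minimal sketch. This is an assumption-laden stand-in, not the cited MPL implementation: NumPy vectorization substitutes for the MP-1's SIMD processor array, with batch rows playing the role of training-vector parallelism and output columns playing the role of unit parallelism.

```python
import numpy as np

# Hypothetical sketch of combining two parallelization axes for one
# fully connected layer. On a SIMD machine like the MP-1, different
# training vectors and different units (neurons) would be laid out
# across the processor-element grid; here both axes are evaluated in
# a single vectorized step, since every (vector, unit) pair is
# independent of the others.

rng = np.random.default_rng(0)

n_vectors, n_in, n_units = 8, 4, 3           # batch, inputs, neurons
X = rng.standard_normal((n_vectors, n_in))   # one training vector per row
W = rng.standard_normal((n_in, n_units))     # one unit's weights per column
b = np.zeros(n_units)

# Training-vector parallelism (rows) and unit parallelism (columns)
# collapse into one data-parallel matrix product plus activation.
A = np.tanh(X @ W + b)

assert A.shape == (n_vectors, n_units)
```

Link-weight parallelism, the third scheme mentioned, would instead distribute the individual elements of `W` across processors; at this level of abstraction it is the same matrix product with a finer-grained layout.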
“…It is unfair to quote times for the latter because time has not permitted completion of the work; however, using only the front-end processor the conditioning time was 5.5 seconds. To properly utilize the power of the machine for parallel computation of a large number of channels requires a complex program, but previous work [3] suggests that this may result in speed improvements of the order of 20.…”
Section: Future Work (mentioning, confidence: 99%)
“…From this data it can be seen that it is advisable to use the local grid as much as possible since the communication bandwidth is much larger than with the router. Also on our machine we experi- Having investigated the trade-offs of different approaches to parallelization of neural networks, as given in [13], [14], [15] and [16], we decided on an implementation which combines unit parallelism with training vector parallelism. All implementations of our parallel simulator kernel were done in MPL, a parallel extension of C. Two of them have recently been converted to AMPL, the ANSI C extension of MPL.…”
Section: A. Architecture of the MP-1 (mentioning, confidence: 99%)