2002
DOI: 10.1142/9789812778086_0004
Mantra I: A Systolic Array for Neural Computation

Abstract: In the last decade, the need has arisen for medium-cost dedicated computers for artificial neural network (ANN) models. Several machines have been proposed. However, only very seldom can systems be considered as massively parallel and, hence, exploit the huge intrinsic parallelism of ANN models. The MANTRA I machine addresses this issue by targeting synapse-level parallelism on a bidimensional systolic array, based on a custom VLSI circuit called GENES IV. A prototype SIMD computer with 400 processing elements…
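The synapse-level parallelism the abstract describes can be pictured with a toy emulation: each processing element of the systolic array holds one synaptic weight and performs a single multiply-accumulate per cycle. The following NumPy sketch is purely illustrative (it is not the GENES IV design); the function name and loop structure are assumptions for exposition.

```python
import numpy as np

def systolic_matvec(weights, x):
    """Toy emulation of synapse-level parallelism on a systolic array.

    Each (i, j) processing element conceptually holds one synaptic
    weight and does one multiply-accumulate per time step; here the
    loop over j plays the role of the systolic cycles, while the
    vectorised operation over i stands in for the PEs of one column
    firing in parallel.
    """
    n_out, n_in = weights.shape
    acc = np.zeros(n_out)
    for j in range(n_in):
        acc += weights[:, j] * x[j]  # one MAC per PE, parallel over i
    return acc
```

In hardware the inputs and partial sums are pumped through the array so that all rows and columns work concurrently; the sketch only reproduces the arithmetic, not the dataflow.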

Cited by 1 publication (2 citation statements)
References 6 publications
“…The analysis performed in this paper includes as a special case (for a unit epoch length) the original algorithm studied in [35]. The advantage of the batch variant is that it decorrelates the winner selection and weight update equations inside an epoch, making batch processing possible, which can then be exploited by massively parallel (at the synaptic level) machines such as systolic array architectures [15,39].…”
Section: Discussion
confidence: 99%
“…the T winners during the kth epoch depend not only on the inputs ξ(kT + t) but also on the actual weight values μi(kT + t) and, therefore, cannot be precomputed. To be able to utilize special parallel hardware, such as the MANTRA machine [15,39], appropriate variants of Kohonen's algorithm that permit decoupling of Equations (3) and (5) have been proposed in [16]. The variant studied in this work is the batch version of Kohonen's algorithm [16,36], in which weight changes are accumulated throughout each epoch and the weights are modified only after the presentation of the last input ξ(kT + T − 1).…”
Section: Introduction
confidence: 99%
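The batch variant quoted above can be sketched in a few lines: winners are selected for all T inputs of an epoch with the weights frozen, and the weights are updated only once, after the last input. The following NumPy sketch is a minimal illustration, not the algorithm of [16] or the MANTRA implementation; the 1-D map layout, Euclidean winner selection, and Gaussian neighbourhood are assumed choices for exposition.

```python
import numpy as np

def batch_som_epoch(weights, inputs, sigma=1.0):
    """One epoch of a batch Kohonen-style update (illustrative sketch).

    Winner selection for the whole epoch uses the frozen weights, so it
    is decoupled from the weight update and can be batched; the weights
    change only after the last input of the epoch.

    weights : (n_units, dim) current codebook, units on a 1-D map
    inputs  : (T, dim) the T inputs of the epoch
    """
    n_units, _ = weights.shape
    units = np.arange(n_units)
    # Winner selection for all T inputs at once, weights frozen.
    dists = np.linalg.norm(inputs[:, None, :] - weights[None, :, :], axis=2)
    winners = np.argmin(dists, axis=1)                       # shape (T,)
    # Gaussian neighbourhood of each winner on the 1-D map.
    h = np.exp(-((units[None, :] - winners[:, None]) ** 2) / (2 * sigma**2))
    # Batch update: each unit moves to the neighbourhood-weighted mean
    # of the epoch's inputs (a convex combination of the inputs).
    num = h.T @ inputs                                       # (n_units, dim)
    den = h.sum(axis=0)[:, None]                             # (n_units, 1)
    return num / den
```

Because the winners no longer depend on intra-epoch weight changes, the T distance computations and accumulations are independent and map naturally onto synapse-level parallel hardware such as a systolic array.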