IEEE International Conference on Neural Networks
DOI: 10.1109/icnn.1993.298738
A neural network systems component

Cited by 8 publications (3 citation statements)
References 4 publications
“…Each chip has 64 fixed-point RISC processors that currently operate at 20 MHz. These processors are designed to operate in an SIMD configuration where several CNAPS chips may be under the control of a single sequencer chip [4]. Each of the 64 processing nodes (PNs) on each CNAPS chip has an adder, a multiplier, a logic unit, 4K bytes of local memory, several general purpose registers, and inter-PN bussing.…”
Section: Mavis Hardware Overview
Confidence: 99%
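The statement above describes the CNAPS organization: a single sequencer broadcasts one instruction stream to 64 processing nodes (PNs), each holding its own weights in local memory and performing the same multiply-accumulate in lockstep. A toy sketch of that SIMD pattern, with all names hypothetical and not drawn from the CNAPS instruction set:

```python
# Illustrative sketch only (not CNAPS code): one sequencer loop drives
# 64 processing nodes, each with private weight memory and an accumulator,
# executing the same multiply-accumulate on a broadcast input.

NUM_PNS = 64

class PN:
    """One processing node: local weight memory plus an accumulator."""
    def __init__(self, weights):
        self.weights = weights   # stands in for the 4K bytes of local memory
        self.acc = 0

    def mac(self, x, i):
        # Multiply-accumulate: the core neural-network inner-product step.
        self.acc += self.weights[i] * x

def simd_dot(pns, inputs):
    """Sequencer loop: every PN executes the same MAC on each broadcast input."""
    for i, x in enumerate(inputs):
        for pn in pns:           # conceptually simultaneous on real hardware
            pn.mac(x, i)
    return [pn.acc for pn in pns]

pns = [PN([1, 1, 1, 1]) for _ in range(NUM_PNS)]
outputs = simd_dot(pns, [1, 2, 3, 4])  # each PN forms its own dot product
```

Because the instruction stream is shared, the per-chip cost scales with the number of inputs, not the number of PNs; this is the parallelism the citing papers exploit for neural-network evaluation.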
“…However, the use of neural networks remains contingent on the availability of powerful hardware to provide adequate speed. Fortunately, the high density of modern technologies lets us implement a large number of identical, concurrently operating processors on one chip, thus exploiting the inherent parallelism of neural networks [6]. The regularity of neural networks and the small number of well-defined arithmetic operations used by neural algorithms greatly simplify the design and layout of VLSI circuits [7].…”
Section: Introduction
Confidence: 99%
“…However, the use of neural networks remains contingent on the availability of powerful hardware to provide adequate speed. Fortunately, the high density of modern technologies lets us implement a large number of identical, concurrently operating processors on one chip, thus exploiting the inherent parallelism of neural networks [4]. The regularity of neural networks and the small number of well-defined arithmetic operations used by neural algorithms greatly simplify the design and layout of VLSI circuits [5].…”
Section: Introduction
Confidence: 99%