A massively parallel, all-digital, stochastic architecture, TInMANN, is described that performs competitive and Kohonen types of learning at rates as high as 145,000 training examples per second regardless of network size. Simulations of TInMANN, both with and without its conscience mechanism activated, demonstrate its effectiveness on a number of example problems.
Artificial neural networks are massively parallel architectures that solve difficult problems via the cooperation of highly interconnected but simple computing elements. Such networks form a different paradigm of computing in the hopes of rapidly solving problems while adapting to a changing environment, qualities that are realized in von Neumann computers only with great difficulty. For example, vector quantization of images normally requires specialized programming to detect and encode the meaningful clusters of vectors, but a Kohonen self-organizing map can adaptively form the necessary codebook using a simple, general learning algorithm [10]. The speed advantage of this and other neural network approaches, however, cannot be realized without specialized hardware.

Unfortunately, the large number of processing elements (which we will refer to as neurons) and their highly interconnected nature make the construction of neural networks challenging. A majority of implementations rely on analog electronics to provide compact neurons possessing the required computational primitives (usually the summation and nonlinear transformation of signals from other neurons in the net). Analog chips have been fabricated containing 512 neurons and a 512 × 512 fixed-resistor matrix which can be programmed to solve a specific problem very rapidly [4]. However, variable resistors (or weights) are needed for the network to learn from previous experience. Such Hopfield neural networks have been built which store their weights on low-leakage capacitors [7] or in digital storage registers [14], while a Kohonen network is under construction which also employs adjustable analog weights [8]. The increased complexity and size of these adjustable weights greatly reduces the number of neurons which can be placed on a chip. Real-world applications will require many neurons, so finding a method of interconnecting these chips to form larger networks will be of primary concern. However, the large number of analog signals which must pass between chips will exceed the available input/output resources, while the noise and parasitic capacitances on the external I/O lines will distort the operation of the network and possibly lead to erroneous results.

The limitations of analog computing have led researchers of neural networks to rely upon digital simulation. Currently available high-speed ALUs excel at the inner product computations often found in neural network algorithms and have been used to construct a general-purpose neural network architecture [15] and a dedicated Kohonen network [5]. The size and I/O requirements of the digital logic limit the number of ALUs which...