To demonstrate the versatility of the building-block approach, two neural network applications were implemented on cascaded analog VLSI chips. Weights were implemented with 7-bit multiplying digital-to-analog converter (MDAC) synapse circuits, with 31x32 and 32x32 synapses per chip. A new learning algorithm compatible with analog VLSI was applied to the two-input parity problem. The algorithm combines a dynamically evolving architecture with limited gradient-descent backpropagation for efficient and versatile supervised learning. To implement the learning algorithm in hardware, synapse circuits were paralleled for additional quantization levels. The hardware-in-the-loop learning system allocated 2-5 hidden neurons for parity problems. In addition, a 7x7 assignment problem was mapped onto a cascaded 64-neuron fully connected feedback network. In 100 randomly selected problems, the network found optimal or good solutions in most cases, with settling times in the range of 7-100 microseconds.
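As a rough illustration of how such a constructive, hardware-in-the-loop scheme can operate, the Python sketch below grows a single tanh hidden layer one neuron at a time and re-quantizes every weight update onto a 6-bit + sign grid. This is a minimal sketch under stated assumptions, not the paper's actual algorithm: measure() merely simulates reading the chip, and the learning rate, epoch budget, and stopping rule are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, levels=63):
    """Snap weights to a 6-bit magnitude + sign MDAC grid, normalized to [-1, 1]."""
    return np.clip(np.round(w * levels), -levels, levels) / levels

def measure(X, W_h, W_o):
    """Software stand-in for reading the analog chip: tanh hidden and output layers."""
    H = np.tanh(X @ W_h)
    return np.tanh(H @ W_o), H

def train_parity(X, y, max_hidden=5, epochs=3000, eta=0.05):
    """Grow the hidden layer one neuron at a time; train by quantized gradient descent."""
    for n_hidden in range(2, max_hidden + 1):   # paper reports 2-5 hidden neurons
        W_h = quantize(rng.uniform(-1, 1, (X.shape[1], n_hidden)))
        W_o = quantize(rng.uniform(-1, 1, (n_hidden, 1)))
        for _ in range(epochs):
            out, H = measure(X, W_h, W_o)
            if np.all(np.sign(out) == y):       # solved: stop growing the network
                return W_h, W_o, n_hidden
            err = y - out
            delta_o = err * (1 - out ** 2)              # output-layer error term
            delta_h = (delta_o @ W_o.T) * (1 - H ** 2)  # backpropagated hidden term
            W_o = quantize(W_o + eta * H.T @ delta_o)   # updates re-quantized, since
            W_h = quantize(W_h + eta * X.T @ delta_h)   # the MDACs hold discrete levels
    return W_h, W_o, n_hidden  # best effort if parity was not solved

# Two-input parity (XOR) in bipolar encoding
X = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])
y = np.array([[-1.], [1.], [1.], [-1.]])
W_h, W_o, n = train_parity(X, y)
print(f"stopped with {n} hidden neurons")
```

The key point the sketch captures is that every weight the learning loop produces must land on a level the MDAC synapses can actually store, so the update rule quantizes after each step rather than accumulating high-precision weights.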
Artificial neural network paradigms are derived from biological nervous systems and are characterized by massive parallelism. These networks have demonstrated the capability to perform input-output mapping operations even where the transformation rules are unknown, partially known, or ill-defined. For high-speed processing, we have fabricated neural network architectures as building-block chips containing either a 32x32 matrix of synapses, or a 32x31 array of synapses together with 32 neurons along the diagonal to form a 32x32 matrix. Reconfigurability allows a variety of architectures, from fully recurrent to fully feedforward, including constructive architectures such as cascade correlation, and a variety of gradient-descent learning algorithms have been implemented. Additionally, because the chips are cascadable, larger networks are easily assembled. An innovative scheme of combining two identical synapses on two respective chips in parallel nominally doubles the bit resolution from 7 bits (6-bit + sign) to 13 bits (12-bit + sign). We describe a feedforward network, assembled from eight chips on a board with nominally 13 bits of weight resolution, used for hardware-in-the-loop learning of a feature-classification problem involving map data. This neural network hardware, with 27 analog inputs and 7 outputs, learns to classify the features and provide the required output map at high speed with 89% accuracy. Despite the hardware's lower weight precision, this result compares favorably with the 92% accuracy obtained both by a neural network software simulation (with floating-point synaptic weights) and by the statistical technique of k-nearest neighbors.

INTRODUCTION

Artificial neural networks, as a parallel-processing paradigm, are considered particularly suitable for a variety of image-processing problems such as terrain and map-data analysis [1], shape and waveform analysis [2,3], medical image analysis [4], and object discrimination and artificial vision [5-8]. Modern general-purpose computers allow simulation of almost any neural network architecture and learning algorithm, and such simulations in many cases afford the easiest and most cost-effective approach for neural network applications. However, there are major application areas that require or benefit from custom neural hardware [9-11]. Custom hardware is especially necessary where the required throughput exceeds what available computers can sustain, due either to very large network size or to the need for short (real-time) learning or response intervals [12].

One key characteristic that distinguishes different implementations is the level of parallelism. During the past ten years, research at the Jet Propulsion Laboratory has focused mainly on VLSI implementations of neural networks using a combination of digital and analog custom-CMOS design refinements that offer parallelism and connectivity to the fullest. Reconfigurable and cascadable neural network architectures have been designed and fabricated as building-block chips. As all the synapses are individually reconfigurable through software...