This paper describes a gradient descent technique, suitable for hardware implementation, for training radial basis function (RBF) networks. The method dynamically adjusts the positions and widths of the basis functions so as to reduce the total output error of the network while the output connection weights are being trained. The algorithm is demonstrated by using it to train an RBF network to perform simple logical functions.
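The abstract does not give the update equations, but the following is a minimal sketch of the kind of procedure it describes: gradient descent applied simultaneously to the centers, widths, and output weights of a Gaussian RBF network, trained here on XOR as a simple logical function. The network size, learning rate, and the clamp that keeps the widths positive are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

# Minimal sketch: joint gradient descent on the centers, widths, and output
# weights of a Gaussian RBF network, trained on XOR. All hyperparameters are
# illustrative assumptions, not values from the paper.

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # inputs
T = np.array([0., 1., 1., 0.])                           # XOR targets

H = 4                                      # number of basis functions (assumed)
C = X + rng.normal(0.0, 0.2, size=(H, 2))  # center positions
S = np.full(H, 0.7)                        # basis-function widths
W = rng.normal(0.0, 0.1, size=H)           # output connection weights
eta = 0.05                                 # learning rate (assumed)

for epoch in range(3000):
    for x, t in zip(X, T):
        d2 = np.sum((x - C) ** 2, axis=1)       # squared distances to centers
        phi = np.exp(-d2 / (2 * S ** 2))        # Gaussian activations
        e = W @ phi - t                         # output error
        # Gradients of 0.5 * e**2 with respect to weights, centers, and widths.
        grad_W = e * phi
        grad_C = e * (W * phi / S ** 2)[:, None] * (x - C)
        grad_S = e * W * phi * d2 / S ** 3
        W -= eta * grad_W
        C -= eta * grad_C
        S = np.maximum(S - eta * grad_S, 0.1)   # keep widths positive (practical safeguard)

for x in X:
    y = W @ np.exp(-np.sum((x - C) ** 2, axis=1) / (2 * S ** 2))
    print(x, round(float(y), 2))
```

All three parameter groups are moved on every presentation of a training pattern, which matches the abstract's point that the basis-function positions and widths adapt while the output weights are being trained.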
If hardware implementations of neural networks are to be successful, new models are required that are simpler to implement. As a first step in this direction, a new associative memory model is proposed that is specifically designed for optical implementation. This memory is a modification of Kanerva's sparse distributed memory [1], which eliminates the negative connection elements by employing sparsely coded representations on all layers. The advantages of sparse coding include improved storage capacity, a small weight range, and a simplified learning rule. The implementation of the memory is illustrated by suggesting several optical architectures, and the advantages and disadvantages of the model in each case are discussed.
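The abstract only alludes to the mechanics of the modified model. Below is a rough sketch under explicit assumptions: binary, sparsely coded hard-location addresses, overlap-threshold activation, and a clipped (OR-style) Hebbian storage rule in place of Kanerva's signed counters, which is one way to keep every connection element non-negative. The sizes, the activation threshold, and the top-k readout are invented for illustration and are not details from the paper.

```python
import numpy as np

# Rough sketch of a sparse-coded variant of Kanerva's sparse distributed
# memory: sparse binary codes on every layer and a clipped (OR) Hebbian
# storage rule, so no negative connection elements are needed. All sizes
# and thresholds below are illustrative assumptions.

rng = np.random.default_rng(1)
N, n, k = 2000, 256, 16          # hard locations, word length, active bits per word

def sparse_word(length, active):
    w = np.zeros(length, dtype=np.uint8)
    w[rng.choice(length, active, replace=False)] = 1
    return w

A = np.stack([sparse_word(n, k) for _ in range(N)])   # binary hard-location addresses
M = np.zeros((N, n), dtype=np.uint8)                  # binary (non-negative) contents
theta = 4                                             # activation threshold (assumed)

def activate(addr):
    return (A @ addr) >= theta            # locations with sufficient overlap with addr

def write(addr, data):
    M[activate(addr)] |= data             # clipped Hebbian (OR) storage: weights stay 0/1

def read(addr):
    score = M[activate(addr)].sum(axis=0) # column sums over the active locations
    out = np.zeros(n, dtype=np.uint8)
    out[np.argsort(score)[-k:]] = 1       # keep the k strongest bits (sparse output)
    return out

addr, data = sparse_word(n, k), sparse_word(n, k)
write(addr, data)
print("recalled correctly:", np.array_equal(read(addr), data))
```

In this sketch every connection element is either 0 or 1, so the weight range is minimal and learning reduces to setting bits, which corresponds to the small weight range and simplified learning rule the abstract lists among the advantages of sparse coding.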