Abstract — Investigation of adaptive control systems using neuronlike networks for the optimization of multitasking control of an unknown object has revealed that identification of the unknown object should precede the main adaptation process. The Adaptive Neuronlike Network (ANN) is used to simulate an "inverted object model". As a result of the identification procedure, a joined block composed of the unknown object and the ANN may be described by a matrix close to the identity matrix. This procedure considerably simplifies the optimization of multitasking control. A new model of a neuronlike element with nonlinear presynaptic inhibition is introduced. Applying this model together with a modified learning process makes it possible to simulate a broad class of nonlinear multidimensional objects.

Abstract — A new learning algorithm for multi-layered neural networks is presented. This algorithm, called minimal disturbance back-propagation, approximates a least-mean-squared-error minimization of the error function while minimally disturbing the connection weights in the network. This means that the information previously trained into the network is disturbed by the smallest amount possible while achieving the desired error correction. Simulation results indicate that this algorithm is more robust and yields much faster convergence than the standard back-propagation algorithm.

Abstract — This report presents a back-propagation algorithm that varies the number of hidden units. This algorithm is expected to escape local minima and makes it no longer necessary to decide the number of hidden units in advance. We explain exclusive-OR training and 8 × 8-dot alphanumeric font training using this algorithm. In exclusive-OR training, the probability of being trapped in local minima is reduced. In alphanumeric font training, the network converged two to three times faster than with the conventional back-propagation algorithm.
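The minimal-disturbance idea described above can be illustrated for the simplest case, a single linear unit: among all weight changes that exactly correct the output error on the current pattern, pick the one with the smallest norm (the normalized-LMS form of the correction). This is a minimal sketch for illustration only, not the paper's multi-layer algorithm; the function name and example values are chosen here.

```python
import numpy as np

def minimal_disturbance_update(w, x, target):
    """Smallest-norm weight change that drives a linear unit's output
    w·x exactly to `target` for the current input pattern `x`.
    (Normalized-LMS form of the correction; illustrative sketch.)"""
    error = target - w @ x
    # The minimum-norm solution moves w only along the direction of x.
    delta = error * x / (x @ x)
    return w + delta

w = np.zeros(3)
x = np.array([1.0, 2.0, 2.0])
w_new = minimal_disturbance_update(w, x, target=9.0)
print(w_new @ x)  # the error on this pattern is now exactly corrected: 9.0
```

Because the correction lies entirely along the current input direction, weights serving other, dissimilar patterns are perturbed as little as possible, which is the "minimal disturbance" property the abstract refers to.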
Abstract — Primacy and recency effects are analyzed mathematically for back-propagation algorithms (generalized delta rule) that use momentum. Our results show that when the conventional momentum parameter is used, a primacy effect occurs: the current values of the weights are biased toward the first presentations in a sequence of training patterns. To produce a recency effect, we introduce a different momentum parameter; under this recency effect, the current values of the weights depend more on recent presentations of training patterns. A method is provided for selecting a momentum parameter based on the effect desired: primacy or recency.

THE EFFECTS OF PRECISION CONSTRAINTS IN A BACK-PROPAGATION LEARNING NETWORK
Abstract — This paper presents a study of precision constraints imposed by a hybrid chip architecture with analog neurons and digital back-propagation calculations. Conversions between the analog and digital domains and weight-storage restrictions impose precision limits on both analog and digital calculations. It is shown through simulations that a learning s...
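The momentum recursion the primacy/recency analysis refers to is the standard heavy-ball update, in which each step accumulates past gradients with geometrically decaying coefficients. The sketch below shows only that conventional recursion so the role of the momentum parameter is concrete; it does not reproduce the paper's mathematical analysis, and the function name and constants are chosen here for illustration.

```python
def sgd_momentum_step(w, v, grad, lr=0.1, alpha=0.9):
    """One heavy-ball momentum step:  v <- alpha*v - lr*g,  w <- w + v.

    Unrolling the recursion shows each update mixes past gradients
    g_{t-k} with coefficients alpha**k; how this accumulation biases
    the final weights toward early versus late pattern presentations
    is the effect the abstract above analyzes.
    """
    v = alpha * v - lr * grad
    return w + v, v

w, v = 0.0, 0.0
for g in [1.0, 1.0, 1.0]:  # three identical pattern presentations
    w, v = sgd_momentum_step(w, v, g)
print(round(w, 3))  # -0.561
```

With alpha = 0 the recursion reduces to plain gradient descent; the choice of momentum parameter therefore directly controls how strongly a sequence of pattern presentations is smoothed into the current weight values.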