Adaptive networks can be easily trained to associate arbitrary input and output patterns. When subgroups of patterns (lists) are presented sequentially, however, a network tends to "unlearn" previously acquired associations while learning new associations. A second form of sequential learning problem is reported in this paper. Learning of each successive list of pattern pairs becomes progressively more difficult. Evidence for this cumulative negative transfer was obtained from simulations using backpropagation of errors to train multilayer networks. The cause of the problem appears to be the development of extreme weights during learning of new lists. Unbounded weights may be a liability for the backpropagation algorithm.
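The abstract does not give the simulation details, but a minimal sketch of the sequential-learning setup it describes can make the phenomenon concrete. The sketch below trains a small two-layer network with plain backpropagation on one list of random pattern pairs, then on a second list, and measures how the error on the first list changes. All sizes, learning rates, and pattern sets here are illustrative assumptions, not the paper's actual configuration.

```python
# Illustrative sketch (not the paper's setup): a tiny two-layer network
# trained with backpropagation, first on list A, then on list B.
# Error on list A typically rises sharply after training on B, showing
# the "unlearning" of earlier associations described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical pattern pairs: random 8-bit inputs, random 4-bit targets.
X_a = rng.integers(0, 2, (5, 8)).astype(float)
Y_a = rng.integers(0, 2, (5, 4)).astype(float)
X_b = rng.integers(0, 2, (5, 8)).astype(float)
Y_b = rng.integers(0, 2, (5, 4)).astype(float)

W1 = rng.normal(0.0, 0.5, (8, 6))   # input -> hidden weights
W2 = rng.normal(0.0, 0.5, (6, 4))   # hidden -> output weights

def train(X, Y, epochs=2000, lr=0.5):
    global W1, W2
    for _ in range(epochs):
        H = sigmoid(X @ W1)            # forward pass
        O = sigmoid(H @ W2)
        d_o = (O - Y) * O * (1 - O)    # output delta (squared error)
        d_h = (d_o @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ d_o           # gradient-descent weight updates
        W1 -= lr * X.T @ d_h

def error(X, Y):
    return np.mean((sigmoid(sigmoid(X @ W1) @ W2) - Y) ** 2)

train(X_a, Y_a)
print("error on list A after learning A:", error(X_a, Y_a))
train(X_b, Y_b)  # sequential learning of a second list
print("error on list A after learning B:", error(X_a, Y_a))  # typically much worse
```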
Neural network (NN) classifiers have been applied to numerous practical problems of interest. A very common type of NN classifier is the multi-layer perceptron, trained with back propagation. Although this learning procedure has been used successfully in many applications, it has several drawbacks, including susceptibility to local minima and excessive convergence times. This paper presents two alternatives to back propagation for synthesizing NN classifiers. Both procedures generate appropriate network structures and weights in a fast and efficient manner without any gradient descent. The resulting decision rules are optimal under certain conditions; the weights obtained via these procedures can be used "as is" or as a starting point for back propagation.
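The abstract does not describe the two synthesis procedures themselves. As a hedged illustration of the general idea of setting classifier weights in closed form without gradient descent, the sketch below builds a one-layer nearest-class-mean classifier: for class means mu_k, the linear score w_k . x + b_k with w_k = mu_k and b_k = -||mu_k||^2 / 2 ranks classes by distance to x. The function name and data are hypothetical; this is not the paper's method, only an example of the non-gradient style it refers to.

```python
# Closed-form weight synthesis for a one-layer classifier (nearest-mean rule).
# No descent steps: weights come directly from class statistics, and could be
# used directly or as an initialization for back propagation.
import numpy as np

def synthesize_nearest_mean(X, y, n_classes):
    """Return (W, b) so that argmax(X @ W + b, axis=1) is nearest-mean classification."""
    W = np.stack([X[y == k].mean(axis=0) for k in range(n_classes)], axis=1)
    b = -0.5 * np.sum(W ** 2, axis=0)  # bias = -||mu_k||^2 / 2
    return W, b

rng = np.random.default_rng(1)
# Two synthetic Gaussian classes in 2-D (illustrative data only).
X = np.concatenate([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

W, b = synthesize_nearest_mean(X, y, n_classes=2)
pred = np.argmax(X @ W + b, axis=1)
print("training accuracy:", np.mean(pred == y))
```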