configurations are tried, and if they do not yield an acceptable solution, they are discarded. Another topology is then defined and the whole training process is repeated. As a result, the possible benefits of training the original network architecture are lost and the computational cost of retraining becomes prohibitive. Another approach involves using a larger-than-needed topology and training it until a convergent solution is found. After that, weights of the network are pruned off if their values are negligible and have no influence on the performance of the network [7]. Since the pruning approach starts with a large network, the training time is longer than necessary and the method is computationally inefficient. It may also get trapped in one of the intermediately sized solutions because of the shape of the error surface and hence never find the smallest network solution. Additionally, the relative importance of the nodes and weights depends on the particular mapping problem which the network is attempting to approximate, and the pruning method makes it difficult to come up with a general cost function that would yield small networks for an arbitrary mapping. In the procedure suggested in [8], the error curve is monitored during the training process, and a node is created when the ratio of the drop in the mean squared error (MSE) over a fixed number of trials falls below an a priori chosen threshold slope. This procedure then uses the conventional, LMS-type, back-propagation algorithm to train the new architecture.

In this paper a new recursive procedure for node creation in multilayer back-propagation neural networks is introduced. The derivations of the methodology are based upon the application of the Orthogonal Projection Theorem [12]. Simulation results on various examples are presented which indicate the effectiveness of the node creation scheme developed in this paper when used in conjunction with the RLS-based learning method.
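The error-curve monitoring rule of [8] described above can be sketched as a simple stall detector: a node is created when the relative drop in MSE over a fixed window of trials falls below an a priori chosen threshold. The window size, threshold value, and function name here are illustrative assumptions, not the exact settings of [8].

```python
def should_add_node(mse_history, window=100, threshold=1e-3):
    """Return True when training has stalled, i.e. the relative drop in
    MSE over the last `window` trials has fallen below `threshold`.
    Both `window` and `threshold` are illustrative a priori choices."""
    if len(mse_history) < window + 1:
        return False  # not enough history to measure a slope yet
    old = mse_history[-window - 1]
    new = mse_history[-1]
    # ratio of the MSE drop over the monitoring window
    drop_ratio = (old - new) / old if old > 0 else 0.0
    return drop_ratio < threshold
```

After the trigger fires, the procedure of [8] retrains the enlarged architecture with conventional LMS-type back-propagation, whereas the method of this paper instead adjusts the new weights recursively.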
II. TRAINING PROCESS OF MULTILAYER NEURAL NETWORKS

In this section the problem of weight updating in multilayer neural networks is formulated in the context of the geometric orthogonal projection [11], [12]. The sum of squared errors is viewed as the squared length (or norm) of an error vector, which is minimized using the geometric approach. It will be shown that the solution of the time update leads to the RLS adaptation [9], [10], and that the solution of the order update allows us to recursively add nodes to the hidden layers during the training process. Consider an M-layer network as shown in Fig. 1.
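For a single linear unit, the time update that the orthogonal-projection derivation leads to is the standard RLS recursion. The following is a minimal sketch of that recursion, not the paper's full multilayer algorithm; the variable names and the forgetting factor are assumptions for illustration.

```python
import numpy as np

def rls_update(w, P, x, d, lam=1.0):
    """One recursive least-squares (RLS) time update for a linear unit.
    w:   weight vector
    P:   inverse input-correlation matrix
    x:   input vector
    d:   desired output
    lam: forgetting factor (1.0 = no forgetting)"""
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = d - w @ x                    # a priori output error
    w = w + k * e                    # weight time update
    P = (P - np.outer(k, Px)) / lam  # inverse-correlation update
    return w, P
```

Initializing P to a large multiple of the identity approximates the unregularized least-squares solution; the order-update counterpart, derived later in the paper, is what permits nodes to be added without restarting this recursion.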
Abstract—This paper presents a novel approach for simultaneous recursive weight adaptation and node creation in multilayer perceptron neural networks. The method uses time and order update formulations in the orthogonal projection method to arrive at a recursive weight updating procedure for the training process of the neural network and a recursive node creation algorithm for weight adjustment of a layer with added nodes during the training process. The proposed approach allows optimal dynamic node creation in the sense that the mean-squared error is minimized for each new topology. The effectiveness of the algorithm is demonstrated on a real-world application for detecting and classifying underground dielectric anomalies.