Abstract—In this paper, we present a fast training algorithm with which one can sequentially determine the needed hidden nodes and the values of the associated weights for classification and pattern recognition. This new approach addresses problems in backpropagation and other gradient-descent training algorithms, including long training times and the determination of the proper number of hidden nodes. We mitigate these difficulties by sequentially extracting important attributes of the training data when training each hidden node. The proposed algorithm separates network training into the training of each layer of the network. The input layer is designed to partition the input data space using linear discriminant functions. Training starts from one hidden node. By applying a linear discriminant algorithm, a separable subset of the data is deleted from the training set; the remaining data is carried over to the training of the next hidden node.
A hidden node is added to the network only when it is needed, that is, when the classification performance on the training set is not yet good enough. Thus the training data set is reduced sequentially while training is in progress. Each node of the output layer performs a logic function of the binary outputs of the hidden nodes. The training algorithm for the output layer is the same as Boolean minimization.
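The sequential procedure summarized above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a two-class problem, uses a perceptron as the linear discriminant for each hidden node, and takes "separable subset" to mean the half-space of a trained node that contains points of only one class. The function name and purity test are our assumptions.

```python
import numpy as np

def train_sequential_hidden_nodes(X, y, max_nodes=10, epochs=100, lr=0.1):
    """Sequentially add hidden nodes, each trained as a linear
    discriminant (here: a perceptron). After training a node, delete
    from the training set the subset it separates cleanly, and train
    the next node on the remaining data."""
    rng = np.random.default_rng(0)
    nodes = []                      # list of (weight vector, bias)
    Xr, yr = X.copy(), y.copy()     # remaining (not yet separated) data
    for _ in range(max_nodes):
        if len(Xr) == 0:            # all training data has been separated
            break
        # Perceptron-style training of one linear discriminant.
        w = rng.normal(size=X.shape[1])
        b = 0.0
        t = np.where(yr == 1, 1.0, -1.0)   # targets in {-1, +1}
        for _ in range(epochs):
            for xi, ti in zip(Xr, t):
                if ti * (xi @ w + b) <= 0:  # misclassified: update
                    w += lr * ti * xi
                    b += lr * ti
        nodes.append((w, b))
        # Delete the separable subset: a half-space that contains
        # points of only a single class.
        side = (Xr @ w + b) > 0
        pos_pure = side.any() and len(set(yr[side])) == 1
        neg_pure = (~side).any() and len(set(yr[~side])) == 1
        if pos_pure:
            Xr, yr = Xr[~side], yr[~side]
        elif neg_pure:
            Xr, yr = Xr[side], yr[side]
        else:
            break   # no cleanly separable subset found; stop adding nodes
    return nodes
```

In this sketch the binary outputs of the trained nodes (which side of each hyperplane a point falls on) would then feed the output layer's logic function; the Boolean minimization step for that layer is not shown.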