A back propagation network model employing a Threshold Logic Transform (TLT) for faster training is proposed. TLT, a simple mathematical transformation, is inserted between the input layer and the hidden layer to speed up the extraction of complex features of the input. A study of three classification tasks comparing the conventional method with the TLT-augmented network revealed that with TLT the convergence speed is 5 to 20 times faster. The rate of convergence is also much higher: 99.3% with TLT versus 33.3% for the conventional method. Furthermore, analyses of the weighted sums of inputs reveal that the hidden units of the proposed networks effectively extract the global features of the input.
1. Introduction

Preprocessing is an effective way to increase the speed of convergence and improve the rate of convergence of back propagation (BP) networks. Preprocessing approaches generally fall into two categories: 1. problem-specific feature extraction, such as FFT-cepstrum feature extraction from an input speech signal 2); and 2. input data transformation, as in the network proposed by Namatame et al., which has been successfully applied to 2-D object recognition 3). Namatame's model, called the Chebychev network, uses special units whose transfer functions are Chebychev polynomials that perform the input data transformation. However, Chebychev polynomials, which involve cosine and arc cosine, are computationally expensive, and it is also difficult to identify the roles of the hidden units. The proposed BP networks employ the Threshold Logic Transform (TLT), a simple mathematical transformation inserted between the input layer and the hidden layer to speed up the extraction of complex features of the input. The proposed model employing TLT (the TLT network) performs better than the conventional BP network in terms of both the speed of convergence and the rate of convergence. Section 2 describes the details of TLT networks. Section 3 presents simulation results on three classification tasks, followed by the discussion in Section 4 and the conclusion in Section 5.
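For reference, the computational cost mentioned above follows from the standard trigonometric form of the Chebychev polynomial of the first kind (this identity is textbook material and is not quoted from the paper's own notation): each Chebychev unit must evaluate an arc cosine and a cosine for every input presentation,

$$T_n(x) = \cos\bigl(n \arccos x\bigr), \qquad x \in [-1, 1].$$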
2. TLT networks

Fig. 1 illustrates the topology of the TLT network. Like the Chebychev network, the TLT network has an additional layer of units, which we call the "TLT layer". It is inserted between the input layer and the first hidden layer for faster training. Without the TLT layer the network is equivalent to a multilayer BP feedforward neural network. All the hidden and output units have sigmoidal nonlinearities as transfer functions as well as bias terms. Biases and weights between the TLT layer and the output layer are adjusted using the back propagation algorithm. Each input unit is connected to a disjoint set of N units in the TLT layer, with all weights equal to +1. Each unit in the set has a transfer function $T_i$ ($i = 0, 1, 2, \ldots, N-1$):
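The definitions of the transfer functions $T_i$ are given in the paper but are not reproduced in this excerpt. The following is a minimal sketch of the forward pass through such a network, assuming the $T_i$ are hard-threshold (step) functions at N evenly spaced thresholds over a normalized input range; the thresholds, layer sizes, and helper names (tlt_layer, sigmoid) are illustrative assumptions, not taken from the paper.

import numpy as np

def tlt_layer(x, N=8):
    """Expand each input component into N threshold-logic outputs.

    x : 1-D array of input unit activations, assumed scaled to [0, 1].
    Each input unit feeds its own disjoint group of N TLT units through
    fixed +1 weights; the group's i-th unit fires when the input exceeds
    an assumed threshold i/N. Returns an array of length len(x) * N.
    """
    thresholds = np.arange(N) / N                 # assumed thresholds 0, 1/N, ..., (N-1)/N
    return (x[:, None] > thresholds[None, :]).astype(float).ravel()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Example forward pass: 4 inputs -> 4*8 TLT units -> 5 hidden -> 2 outputs.
rng = np.random.default_rng(0)
x = rng.random(4)
t = tlt_layer(x, N=8)                             # fixed transform, no trainable weights
W1, b1 = rng.normal(size=(5, t.size)), np.zeros(5)  # weights from TLT layer onward are
W2, b2 = rng.normal(size=(2, 5)), np.zeros(2)        # the ones trained by back propagation
h = sigmoid(W1 @ t + b1)
y = sigmoid(W2 @ h + b2)

Note that in this reading the TLT layer itself contains no adjustable parameters; only the weights and biases from the TLT layer to the output layer are trained, as stated above.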