This paper examines the chaotic behavior of Back Propagation neural networks during the training phase. The networks are trained with ordinary parameter values, and two different cases are considered. In the first case, the network fails to achieve the desired convergence within a pre-specified number of epochs. The chaotic behavior of this network is demonstrated by examining the values of the dominant Lyapunov exponents of the weight data series produced by additional training. For each training epoch, the data series representing the input patterns that produce the minimum absolute output error during additional training is also subjected to Lyapunov exponent analysis. The purpose of this analysis is to determine whether the network exhibits chaotic pattern competition among the best-learned inputs. In the second case, the network is improved and the desired convergence is achieved. Again, the investigation focuses on the series of values representing the input patterns that produce outputs with minimum absolute error. The results of the dominant Lyapunov exponent estimations show that chaotic pattern competition is still present, even though the network practically satisfies the stability requirements within predetermined accuracy limits. The best-estimation series consist of the output values corresponding to the best-learned input patterns. These series are examined using the theoretical tool of topological conjugacy, in addition to numerical verification of the results.
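The abstract refers to estimating dominant Lyapunov exponents from scalar data series, such as the per-epoch values of a network weight recorded during additional training. The paper does not specify its estimator here; the following is a minimal sketch assuming Rosenstein's nearest-neighbor divergence method, with illustrative parameters (`emb_dim`, `lag`, `min_sep`, `max_iter`) chosen for exposition rather than taken from the paper.

```python
import numpy as np

def dominant_lyapunov(series, emb_dim=4, lag=1, min_sep=10, max_iter=50):
    """Estimate the dominant Lyapunov exponent of a scalar time series.

    Sketch of Rosenstein's nearest-neighbor divergence method (an assumed
    choice; the abstract does not name the estimator). A positive result
    indicates sensitive dependence on initial conditions, i.e. chaos.
    """
    x = np.asarray(series, dtype=float)
    n = len(x) - (emb_dim - 1) * lag
    # Delay-embed the scalar series into emb_dim-dimensional phase space.
    orbit = np.stack([x[i * lag : i * lag + n] for i in range(emb_dim)], axis=1)

    # For each embedded point, find its nearest neighbor separated in time
    # by at least min_sep steps, to avoid trivial temporal correlations.
    dists = np.linalg.norm(orbit[:, None, :] - orbit[None, :, :], axis=2)
    idx = np.arange(n)
    dists[np.abs(idx[:, None] - idx[None, :]) < min_sep] = np.inf
    nn = np.argmin(dists, axis=1)

    # Track the mean log-divergence of neighbor pairs as time advances.
    div = []
    for k in range(1, max_iter):
        valid = (idx + k < n) & (nn + k < n)
        if not valid.any():
            break
        d = np.linalg.norm(orbit[idx[valid] + k] - orbit[nn[valid] + k], axis=1)
        d = d[d > 0]
        if len(d) == 0:
            break
        div.append(np.mean(np.log(d)))

    # The slope of the divergence curve approximates the dominant exponent.
    t = np.arange(1, len(div) + 1)
    slope, _ = np.polyfit(t, div, 1)
    return slope
```

Applied, for instance, to the sequence of values a single weight takes across training epochs, a consistently positive estimate would support the chaotic behavior reported above; the same procedure can be applied to the series of best-learned input patterns when testing for chaotic pattern competition.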