This paper demonstrates how the backpropagation algorithm (BP) and its variants can be accelerated significantly while simultaneously improving the quality of the trained nets. Two modifications are proposed: first, the cross entropy is used as the error function instead of the usual quadratic error, and second, the input patterns are normalized. The first modification eliminates the so-called sigmoid prime factor from the update rule for the output units; the second balances the dynamic range of the inputs. The combination of both modifications is called CEN-Optimization (Cross Entropy combined with Pattern Normalization). As our simulation results show, CEN-Optimization improves not only online BP but also RPROP, the most sophisticated BP variant known today. Even though RPROP usually yields much better results than online BP, the performance gap between CEN-BP and CEN-RPROP is smaller than that between the standard versions of these algorithms. With CEN-RPROP it is nearly guaranteed that an error of zero (with respect to the training set) is achieved. At the same time, the generalization performance of the trained nets can be increased, because less complex networks suffice to fit the training set: compared to the usual SSE (summed squared error), lower training errors can be achieved with fewer weights.
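As a brief sketch of why the cross entropy eliminates the sigmoid prime factor (standard textbook notation with our own symbols, assuming logistic output units; not taken verbatim from the paper): for an output unit $y = \sigma(a) = 1/(1 + e^{-a})$ with target $t$, the quadratic error $E_{\mathrm{SSE}} = \tfrac{1}{2}(y - t)^2$ yields

\[
\frac{\partial E_{\mathrm{SSE}}}{\partial a} = (y - t)\,\sigma'(a) = (y - t)\,y(1 - y),
\]

whereas the cross entropy $E_{\mathrm{CE}} = -\left[\,t \ln y + (1 - t)\ln(1 - y)\,\right]$ yields

\[
\frac{\partial E_{\mathrm{CE}}}{\partial a} = \frac{y - t}{y(1 - y)} \cdot y(1 - y) = y - t,
\]

so the factor $\sigma'(a) = y(1 - y)$, which drives the update toward zero for saturated units, cancels exactly.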