It is well known that standard single-hidden-layer feedforward networks (SLFNs) with at most N hidden neurons (including biases) can learn N distinct samples (x_i, t_i) with zero error, and that the weights connecting the input neurons and the hidden neurons can be chosen "almost" arbitrarily. However, these results have been obtained only for the case where the activation function of the hidden neurons is the signum function. This paper rigorously proves that standard SLFNs with at most N hidden neurons and with any bounded nonlinear activation function that has a limit at one infinity can learn N distinct samples (x_i, t_i) with zero error. The previous method of choosing the weights almost arbitrarily is not feasible for SLFNs with such general activation functions. The proof of our result is constructive and thus yields a method to directly find the weights of standard SLFNs with any such bounded nonlinear activation function, as opposed to the iterative training algorithms in the literature.
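The zero-error memorization claim can be illustrated numerically. The sketch below is not the paper's constructive procedure; it is a minimal demonstration, assuming a sigmoid activation (one of the bounded nonlinear activations with a limit at one infinity covered by the result) and randomly drawn input weights, for which the N x N hidden-layer matrix H is invertible with probability one when the N samples are distinct. All names and sizes (N, d, the sigmoid choice) are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# N distinct samples (x_i, t_i): inputs in R^d, scalar targets.
N, d = 10, 3
X = rng.standard_normal((N, d))
T = rng.standard_normal(N)

def sigmoid(z):
    # A bounded nonlinear activation with limits at both infinities.
    return 1.0 / (1.0 + np.exp(-z))

# Hidden layer with exactly N neurons. Input weights W and biases b are
# drawn randomly: for a sigmoid activation and distinct inputs, almost
# every such choice makes the hidden-layer matrix H invertible (the
# "almost arbitrary" choice discussed in the abstract).
W = rng.standard_normal((d, N))
b = rng.standard_normal(N)
H = sigmoid(X @ W + b)   # H[i, j] = output of hidden neuron j on sample x_i

# With H square and invertible, the output weights solve H beta = T
# exactly, so the SLFN reproduces all N targets with zero error.
beta = np.linalg.solve(H, T)

outputs = H @ beta
print("max |output - target| =", np.max(np.abs(outputs - T)))  # ~1e-12
```

The single linear solve replaces iterative training entirely: once H is invertible, the output weights that achieve zero error are determined in closed form.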