This paper presents a new learning theory (a set of principles for brain-like learning) and a corresponding algorithm for the neural-network field. The learning theory defines computational characteristics that are far more brain-like than those of classical connectionist learning. Following these principles rigorously when developing neural-network algorithms should yield robust and reliable learning algorithms. The paper also presents a new algorithm for generating radial basis function (RBF) nets for function approximation, designed according to the proposed learning principles. The net generated by this algorithm is not a typical RBF net but a combination of "truncated" RBF units and other types of hidden units. The algorithm uses random clustering and linear programming (LP) to design and train this "mixed" RBF net. Polynomial time complexity of the algorithm is proven, and computational results are provided for the well-known Mackey-Glass chaotic time series problem, the logistic map prediction problem, various neuro-control problems, and several time series forecasting problems. The algorithm can also be implemented as an online adaptive algorithm.
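The flavor of such a net can be conveyed with a minimal sketch. This is not the paper's algorithm: random center sampling stands in for its random clustering, a least-squares weight fit stands in for its LP training, and all function names and parameter values (`width`, `cutoff`, the sine target) are illustrative assumptions.

```python
import numpy as np

def truncated_rbf(x, centers, width, cutoff):
    """Gaussian RBF activations, zeroed beyond a cutoff radius ("truncated" units)."""
    d = np.abs(x[:, None] - centers[None, :])   # pairwise input-center distances
    phi = np.exp(-(d / width) ** 2)
    phi[d > cutoff] = 0.0                        # truncate far-away responses
    return phi

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x)                                    # toy target function

# Centers picked by random sampling (a simplified stand-in for random clustering).
centers = rng.choice(x, size=15, replace=False)
Phi = truncated_rbf(x, centers, width=0.8, cutoff=2.0)

# Output weights by least squares (a simplified stand-in for the LP training step).
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ w
```

An LP formulation would instead minimize the worst-case residual, e.g. minimize t subject to |Phi w - y| <= t elementwise; the least-squares fit above is used only to keep the sketch self-contained.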
This paper presents a new learning algorithm for multitask pattern recognition (MTPR) problems. We consider learning multiple multiclass classification tasks online, where no information is ever provided about the task category of a training example. The algorithm therefore needs an automated task recognition capability to learn the different classification tasks properly. The learning mode is "online": training examples for the different tasks are mixed at random and presented sequentially, one after another. We assume that the classification tasks are related to each other and that both the tasks and their training examples appear at random during online training. Thus, the learning algorithm has to switch continually from learning one task to another whenever the training examples change to a different task. This also implies that the algorithm has to detect task changes automatically and exploit knowledge of previous tasks to learn new tasks quickly. The performance of the algorithm is evaluated on ten MTPR problems built from five University of California at Irvine (UCI) data sets. The experiments verify that the proposed algorithm can indeed acquire and accumulate task knowledge, and that transferring knowledge from tasks already learned improves both the speed of knowledge acquisition on new tasks and the final classification accuracy. In addition, introducing a reorganization process greatly improves task categorization accuracy on all MTPR problems, even when the presentation order of class training examples is heavily biased.
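The core idea of automated task recognition in an online stream can be sketched as follows. This toy class is not the paper's method (its reorganization process is omitted entirely): each task is represented by a nearest-class-mean model, an incoming example is routed to the task whose class means fit it best, and a new task is opened when no existing task fits within a threshold. The class name and threshold are illustrative assumptions.

```python
import numpy as np

class OnlineMultitaskNCM:
    """Toy online multitask learner with automated task recognition.

    Each task is a dict mapping class label -> (running mean, count).
    Illustrative only; not the MTPR algorithm described in the abstract.
    """

    def __init__(self, new_task_threshold):
        self.tasks = []                      # one class-mean dict per discovered task
        self.threshold = new_task_threshold  # distance beyond which a new task is opened

    def _task_distance(self, task, x):
        # Distance from x to the nearest class mean of this task.
        return min(float(np.linalg.norm(x - mean)) for mean, _ in task.values())

    def observe(self, x, label):
        """Consume one training example; return the task id it was routed to."""
        if self.tasks:
            dists = [self._task_distance(t, x) for t in self.tasks]
            tid = int(np.argmin(dists))
            if dists[tid] > self.threshold:  # no existing task fits: task change detected
                tid = len(self.tasks)
                self.tasks.append({})
        else:
            tid = 0
            self.tasks.append({})
        task = self.tasks[tid]
        mean, n = task.get(label, (np.zeros_like(x, dtype=float), 0))
        task[label] = ((mean * n + x) / (n + 1), n + 1)  # update running class mean
        return tid
```

Feeding examples drawn from two well-separated input regions makes the learner open two distinct tasks, illustrating task-change detection without any task labels.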