We present the broad outlines of a roadmap toward human-level artificial general intelligence (henceforth, AGI). We begin by discussing AGI in general, adopting a pragmatic goal for its attainment and a necessary foundation of characteristics and requirements. An initial capability landscape will be presented, drawing on major themes from developmental psychology and illuminated by mathematical, physiological, and information-processing perspectives. The challenge of identifying appropriate tasks and environments for measuring AGI will be addressed, and seven scenarios will be presented as milestones suggesting a roadmap across the AGI landscape, along with directions for future research and collaboration.

This article is the result of an ongoing collaborative effort by the coauthors, preceding and during the AGI Roadmap Workshop held at the University of Tennessee, Knoxville, in October 2009. Of course, this is far from the first attempt to plot a course toward human-level AGI: arguably this was the goal of the founders of the field of artificial intelligence in the 1950s, and it has been pursued by a steady stream of AI researchers since, even as the majority of the AI field has focused its attention on narrower, more specific subgoals. The ideas presented here build on the ideas of others in innumerable ways, but a full review of the history of AI is beyond the scope of this article.
Catastrophic forgetting is a well-studied attribute of most parameterized supervised learning systems. A variation of this phenomenon, in the context of feedforward neural networks, arises when nonstationary inputs lead to the loss of previously learned mappings. The majority of the schemes proposed in the literature for mitigating catastrophic forgetting are neither data-driven nor scalable. We introduce the fixed expansion layer (FEL) feedforward neural network, which embeds a sparsely encoding hidden layer to help mitigate forgetting of previously learned representations. In addition, we investigate a novel framework for training ensembles of FEL networks, exploiting an information-theoretic measure of diversity between FEL learners to further control undesired plasticity. The proposed methodology is demonstrated on a basic classification task, clearly illustrating its advantages over existing techniques. The proposed architecture can also be extended to a range of computational intelligence tasks, such as regression problems and system control.
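To make the mechanism concrete, the following is a minimal sketch, not the authors' implementation, of a FEL-style network in NumPy: a trainable hidden layer feeds a wide expansion layer with fixed random weights, whose activations are sparsified by k-winners-take-all before a trainable readout. All identifiers (FELNetwork, k_active, and so on) are hypothetical, and for brevity only the readout weights are trained.

```python
import numpy as np

class FELNetwork:
    """Minimal sketch of a feedforward net with a fixed expansion layer."""

    def __init__(self, n_in, n_hidden, n_expand, n_out, k_active, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W_hid = rng.normal(0.0, 0.1, (n_in, n_hidden))      # trainable in the full model
        self.W_exp = rng.normal(0.0, 1.0, (n_hidden, n_expand))  # fixed: never updated
        self.W_out = np.zeros((n_expand, n_out))                 # trainable readout
        self.k = k_active
        self.lr = lr

    def _sparse_code(self, h):
        # k-winners-take-all: only the k most strongly driven expansion
        # units switch on, yielding a binary sparse code of the input.
        z = h @ self.W_exp
        s = np.zeros_like(z)
        s[np.argsort(z)[-self.k:]] = 1.0
        return s

    def forward(self, x):
        h = np.tanh(x @ self.W_hid)
        s = self._sparse_code(h)
        return s @ self.W_out, s

    def train_step(self, x, y):
        y_hat, s = self.forward(x)
        err = y - y_hat
        # The delta-rule update touches only the k active expansion units,
        # so inputs with disjoint sparse codes barely interfere.
        self.W_out += self.lr * np.outer(s, err)
        return float(np.mean(err ** 2))

# Toy usage: one online update on a random sample.
net = FELNetwork(n_in=4, n_hidden=16, n_expand=256, n_out=3, k_active=8)
loss = net.train_step(np.random.rand(4), np.array([1.0, 0.0, 0.0]))
```

Because each input activates only k of the expansion units, weight updates are confined to a small, input-specific subset of readout weights, which is what limits interference. The ensemble criterion described in the abstract is information-theoretic; a simpler illustrative stand-in would be the disagreement rate between two ensemble members' predictions.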
Catastrophic forgetting (or catastrophic interference) in supervised learning systems is the drastic loss of previously stored information caused by the learning of new information. While substantial work has been published on addressing catastrophic forgetting in memoryless supervised learning systems (e.g., feedforward neural networks), the problem has received limited attention in the context of dynamic systems, particularly recurrent neural networks (RNNs). In this paper, we introduce a solution for mitigating catastrophic forgetting in RNNs based on enhancing the fixed expansion layer (FEL) neural network, which exploits sparse coding of hidden neuron activations. Simulation results on several nonstationary data sets clearly demonstrate the effectiveness of the proposed architecture.
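As an illustration of how the same sparse-coding idea could carry over to recurrent networks, here is a minimal NumPy sketch under stated assumptions, not the paper's actual architecture: an Elman-style recurrent state is projected through a fixed expansion layer and sparsified before a trainable readout. For brevity the input and recurrent weights are left fixed (no backpropagation through time), and all identifiers (SparseStateRNN, k_winners, and so on) are hypothetical.

```python
import numpy as np

def k_winners(z, k):
    """Keep the k largest activations, zero the rest (k-winners-take-all)."""
    s = np.zeros_like(z)
    top = np.argsort(z)[-k:]
    s[top] = z[top]
    return s

class SparseStateRNN:
    """Elman-style RNN whose hidden state is passed through a fixed
    expansion layer and sparsified before the readout."""

    def __init__(self, n_in, n_hid, n_expand, n_out, k, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.1, (n_in, n_hid))       # fixed here for brevity
        self.W_rec = rng.normal(0.0, 0.1, (n_hid, n_hid))     # fixed here for brevity
        self.W_exp = rng.normal(0.0, 1.0, (n_hid, n_expand))  # fixed expansion layer
        self.W_out = np.zeros((n_expand, n_out))              # trainable readout
        self.k, self.lr = k, lr
        self.h = np.zeros(n_hid)

    def step(self, x):
        # Recurrent state update, then a sparse code of that state.
        self.h = np.tanh(x @ self.W_in + self.h @ self.W_rec)
        s = k_winners(self.h @ self.W_exp, self.k)
        return s @ self.W_out, s

    def train_step(self, x, y):
        y_hat, s = self.step(x)
        err = y - y_hat
        # Only the currently active expansion units are updated, so
        # sequences learned at different times overwrite little of each other.
        self.W_out += self.lr * np.outer(s, err)
        return float(np.mean(err ** 2))
```

The design intent mirrors the feedforward case: because the recurrent state is re-encoded sparsely at every time step, temporally distant patterns tend to occupy largely disjoint subsets of readout weights, reducing interference between them.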