2017 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2017.7965898
Neurogenesis deep learning: Extending deep networks to accommodate new classes

Abstract: Neural machine learning methods, such as deep neural networks (DNN), have achieved remarkable success in a number of complex data processing tasks. These methods have arguably had their strongest impact on tasks such as image and audio processing: data-processing domains in which humans have long held clear advantages over conventional algorithms. In contrast to biological neural systems, which are capable of learning continuously, deep artificial networks have a limited ability for incorporating new information…

Cited by 66 publications (39 citation statements); references 25 publications.
“…Other incremental methods exist, based e.g. on incremental training of an auto-encoder: new neurons are added in response to a high rate of failure on the new data [31] or based on reconstruction error [5]. AdaNet [3] gradually extends its network by evaluating and selecting among candidate sub-networks.…”
Section: Related Work (mentioning)
confidence: 99%
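The growth trigger described in this excerpt can be made concrete. Below is a minimal sketch, in PyTorch, of adding hidden units to a single-layer autoencoder when reconstruction error on new data exceeds a threshold; the function names, the 0.05 threshold, and the growth step of 8 units are illustrative assumptions, not values from the cited papers [5, 31].

```python
import torch
import torch.nn as nn

def add_hidden_units(encoder: nn.Linear, decoder: nn.Linear, k: int):
    """Grow an autoencoder's hidden layer by k units, keeping old weights."""
    in_dim, hid = encoder.in_features, encooder_out = encoder.out_features, encoder.out_features
    new_enc = nn.Linear(in_dim, hid + k)
    new_dec = nn.Linear(hid + k, decoder.out_features)
    with torch.no_grad():
        new_enc.weight[:hid] = encoder.weight      # copy old encoder rows
        new_enc.bias[:hid] = encoder.bias
        new_dec.weight[:, :hid] = decoder.weight   # copy old decoder columns
        new_dec.bias.copy_(decoder.bias)
    return new_enc, new_dec

def maybe_grow(encoder, decoder, new_data, threshold=0.05, k=8):
    """Add k units when mean reconstruction error on new data is too high."""
    with torch.no_grad():
        recon = decoder(torch.relu(encoder(new_data)))
        err = nn.functional.mse_loss(recon, new_data).item()
    if err > threshold:
        encoder, decoder = add_hidden_units(encoder, decoder, k)
    return encoder, decoder
```

Copying the old rows and columns leaves the existing reconstruction intact; only the freshly initialized units need to learn the new data.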
“…There are a variety of methodologies for tackling the continual learning problem [Parisi et al., 2019]. Existing approaches can be roughly divided into three categories: those that focus on consolidating synapses in a neural network that are critical to retaining previous knowledge [Kirkpatrick et al., 2017; Zenke et al., 2017; Aljundi et al., 2018]; those that employ either an explicit memory buffer or a generative model to interleave the learning of new knowledge with learning based on stored or generated data for the old tasks [McClelland et al., 1995; Ans et al., 2004; Atkinson et al., 2018]; and those that dynamically change the structure of the neural network to accommodate new knowledge [Rusu et al., 2016; Draelos et al., 2017; Yoon et al., 2017]. Here, we address the challenge of continual learning by intelligently adding new neurons to an existing neural network (neurogenesis) so that it learns to solve a new task without forgetting how to solve previously learned tasks.…”
Section: Introduction (mentioning)
confidence: 99%
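As a small illustration of the third category (structural growth), the sketch below extends a classifier's output layer to make room for new classes while copying the previously learned weights. This is a generic pattern under assumed sizes, not the specific method of any paper cited above.

```python
import torch
import torch.nn as nn

def extend_classifier(head: nn.Linear, n_new: int) -> nn.Linear:
    """Return an output layer with n_new extra units for new classes."""
    old_out, in_feats = head.out_features, head.in_features
    new_head = nn.Linear(in_feats, old_out + n_new)
    with torch.no_grad():
        new_head.weight[:old_out] = head.weight  # keep old class weights
        new_head.bias[:old_out] = head.bias
    return new_head

# Usage: grow a 10-way head to accommodate 2 additional classes.
head = extend_classifier(nn.Linear(64, 10), n_new=2)
assert head.out_features == 12
```

Because the old rows are copied unchanged, previously learned decision boundaries are preserved; only the new units start from random initialization.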
“…In contrast, our approach learns where and how many neurons to add. Draelos et al. [2017] proposed the neurogenesis deep learning (NDL) model to incrementally train an autoencoder on new MNIST digits by adding new neurons at each layer and employing a pseudo-rehearsal process called "intrinsic replay" to preserve performance on the old MNIST digits. Yoon et al. [2017] proposed a dynamically expanding network (DEN) that selectively adapts network weights and also expands the network structure at each layer as needed, using group sparse regularization in an online manner to facilitate the sequential learning of tasks.…”
Section: Introduction (mentioning)
confidence: 99%
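For intuition about the pseudo-rehearsal idea mentioned here, the following is a rough sketch in the spirit of "intrinsic replay": decode sampled bottleneck codes into pseudo-samples of old data and interleave them with new-class batches during training. Sampling codes from a Gaussian summary (code_mean, code_std) is an assumption made for illustration; it is not the exact procedure of Draelos et al. [2017].

```python
import torch

def intrinsic_replay_batch(decoder, code_mean, code_std, n=32):
    """Decode sampled bottleneck codes into pseudo-samples of old data.

    The Gaussian summary (code_mean, code_std) of old-class activations
    is an illustrative assumption, not the paper's exact procedure.
    """
    codes = code_mean + code_std * torch.randn(n, code_mean.numel())
    with torch.no_grad():
        return decoder(codes)

# During training on new digits, batches of these decoded pseudo-samples
# are interleaved with real new-class batches so that gradients continue
# to reward accurate reconstruction of the old classes.
```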
“…But this meaning has changed over time [Bengio et al. 2009, LeCun et al. 2015, Draelos et al. 2017]. In 2012, 10 layers were already enough for a neural network to be considered deep.…”
Section: Introduction (unclassified)
“…In 2012, 10 layers were already enough for a neural network to be considered deep. By 2017, however, it is more common to consider a neural network deep when it has more than 100 layers [Draelos et al. 2017]. Deep Learning has been applied in many domains, such as images, text, video, speech, and vision, significantly improving on the best results achieved over decades.…”
Section: Introduction (unclassified)