2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) 2015
DOI: 10.1109/devlrn.2015.7346155

Biologically inspired incremental learning for high-dimensional spaces

Abstract: We propose an incremental, highly parallelizable, and constant-time complexity neural learning architecture for multi-class classification (and regression) problems that remains resource-efficient even when the number of input dimensions is very high (≥ 1000). This so-called projection-prediction (PROPRE) architecture is strongly inspired by biological information processing in that it uses a prototype-based, topologically organized hidden layer trained with the SOM learning rule that upd…
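The abstract's key mechanism is a hidden layer of prototypes trained with the SOM learning rule. As a rough illustration only (not the paper's implementation; grid size, learning rate, and neighbourhood width are arbitrary assumptions here), one SOM update step moves the best-matching prototype and its grid neighbours toward the input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical minimal SOM: a 4x4 grid of prototypes over 3-D inputs.
grid_h, grid_w, dim = 4, 4, 3
W = rng.random((grid_h * grid_w, dim))  # prototype vectors
coords = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)], float)

def som_step(x, W, lr=0.3, sigma=1.0):
    """One SOM update: pull the best-matching unit (and its grid
    neighbours, weighted by a Gaussian over grid distance) toward x."""
    bmu = np.argmin(np.linalg.norm(W - x, axis=1))    # winner
    d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)  # squared grid distance
    h = np.exp(-d2 / (2 * sigma ** 2))                # neighbourhood function
    W += lr * h[:, None] * (x - W)
    return bmu

x = rng.random(dim)
before = np.linalg.norm(W - x, axis=1).min()
som_step(x, W)
after = np.linalg.norm(W - x, axis=1).min()
# after each step, the nearest prototype is closer to the input
```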

Cited by 6 publications (5 citation statements). References 16 publications.
“…1, to address the proposed fusion problem. It is conceptually similar to several of our previous works in multi-sensory fusion [17], incremental learning [18][19][20] and developmental learning [21,22]. In the light of the evaluation criteria for multi-sensory fusion approaches proposed in Sec.…”
Section: Approach
confidence: 68%
“…This is a prototype-based generative algorithm which therefore permits the detection of outliers and changes in input statistics. Apart from approximating the distribution of input samples by prototype vectors, it implements a topology-preserving mapping of the computed prototypes, which is of no direct concern here, but will become extremely important when incremental learning is concerned [19,18]. As incremental learning is the logical next step following successful change detection, we feel that the use of the SOM model is very well justified in this case.…”
Section: Approach
confidence: 99%
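The statement above notes that a prototype-based generative model permits detection of outliers and changes in input statistics. A minimal sketch of that idea (names and the threshold are illustrative assumptions, not from the cited works): flag an input whose distance to its best-matching prototype exceeds a threshold.

```python
import numpy as np

rng = np.random.default_rng(1)

# Prototypes assumed to have been learned from in-distribution data
# clustered near the origin.
prototypes = rng.normal(0.0, 0.1, size=(16, 3))

def quantization_error(x, prototypes):
    """Distance from x to its best-matching prototype; large values
    signal an outlier or a change in input statistics."""
    return np.linalg.norm(prototypes - x, axis=1).min()

inlier = rng.normal(0.0, 0.1, size=3)
outlier = np.array([5.0, 5.0, 5.0])

threshold = 1.0  # hypothetical; set from validation data in practice
is_outlier = quantization_error(outlier, prototypes) > threshold
is_inlier_ok = quantization_error(inlier, prototypes) < threshold
```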
“…• the architecture is conceptually simple, efficient to execute and highly parallelizable as we could show in a previous publication [24] • we present an intuitive way to benchmark incremental learning and apply it to our architecture using two challenging perceptual problems, with excellent results regarding incremental learning performance, which never degrades overall precision by more than a few percent.…”
Section: Significance Of Contribution
confidence: 76%
“…Given that units in a competitive learning system like the SOM tend to specialize for certain types of patterns, i.e., data prototypes, such a system appears to be suited for combating catastrophic forgetting (since, in theory, competing units would lead to reduced neural cross-talk, a source of forgetting (French 1999)). Furthermore, approaches, such as (Gepperth et al 2015), have advocated the use of SOMs for continual learning. However, the SOM in its purest form, as we observe in our experiments, is itself prone to forgetting (Mermillod, Bugaiska, and Bonin 2013).…”
Section: Introduction
confidence: 99%
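The last statement observes that even a plain SOM-style prototype model forgets. A toy demonstration of that effect under stated assumptions (winner-take-all updates, arbitrary learning rate, synthetic two-task data; not the setup of the cited experiments): prototypes fit to task A degrade on task A after being updated only on task B.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two well-separated synthetic "tasks" in 2-D.
task_a = rng.normal(-2.0, 0.2, size=(200, 2))
task_b = rng.normal(+2.0, 0.2, size=(200, 2))

# Prototypes initialised from task A samples.
prototypes = task_a[rng.choice(200, 8, replace=False)].copy()

def qerr(data, protos):
    # mean distance of samples to their nearest prototype
    d = np.linalg.norm(data[:, None, :] - protos[None, :, :], axis=2)
    return d.min(axis=1).mean()

err_a_before = qerr(task_a, prototypes)

# Sequential training on task B only: winner-take-all prototype updates.
for x in task_b:
    w = np.argmin(np.linalg.norm(prototypes - x, axis=1))
    prototypes[w] += 0.5 * (x - prototypes[w])

err_a_after = qerr(task_a, prototypes)
# the task A representation has degraded: forgetting
```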