2019
DOI: 10.1109/tcyb.2018.2801476
Event-Driven Continuous STDP Learning With Deep Structure for Visual Pattern Recognition

Abstract: Human beings can achieve reliable and fast visual pattern recognition with limited time and learning samples. Underlying this capability, the ventral stream plays an important role in object representation and form recognition. Modeling the ventral stream may shed light on further understanding the visual brain in humans and on building artificial vision systems for pattern recognition. Current methods of modeling the mechanism of the ventral stream are far from exhibiting fast, continuous, and event-driven learning like…
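As background for the learning rule named in the title, the following is a minimal sketch of plain pair-based STDP; the amplitudes and time constants are illustrative assumptions, and the paper's event-driven continuous variant is not reproduced here.

```python
import numpy as np

# Minimal pair-based STDP sketch (hypothetical parameters, not the paper's).
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes (assumed)
TAU_PLUS = TAU_MINUS = 20.0     # exponential trace time constants in ms (assumed)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre fires before post -> potentiation
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    else:        # post fires before (or with) pre -> depression
        return -A_MINUS * np.exp(dt / TAU_MINUS)

# Example: a pre-spike at 10 ms followed by a post-spike at 15 ms
# yields a small positive weight change.
print(stdp_dw(10.0, 15.0))
```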

Cited by 31 publications (20 citation statements) · References 48 publications
“…Examples include the use of convolutional layers [28,13,29,30] (and tables therein), dendritic computations [31,32,12], or backpropagation approximations such as feedback alignment [11,33,34,35,36,14], equilibrium propagation [37], membrane-potential-based backpropagation [38], restricted Boltzmann machines and deep belief networks [39,40], (localized) difference target propagation [41,14], reinforcement signals [42,43], or predictive coding [44]. Many models implement spiking neurons to stress bio-plausibility [45,46,47,48,49,13] (and tables therein) or coding efficiency [50]. The conversion of DNNs to spiking neural networks (SNNs) after training with backpropagation [51] is a common technique to evade the difficulties of training with spikes.…”
Section: Related Work
confidence: 99%
“…Lateral inhibition and adaptive threshold voltages are required in ref. [22] as well. Other SNNs have similar requirements.…”
Section: Introduction
confidence: 94%
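For readers unfamiliar with the two mechanisms named in this statement, the sketch below shows one common way lateral inhibition (hard winner-take-all) and an adaptive firing threshold can be combined in a spiking layer; all names and values are illustrative assumptions, not the scheme of ref. [22].

```python
import numpy as np

def wta_step(membrane_v, thresholds, theta_plus=0.05):
    """One decision step for a layer of LIF-like neurons with hard
    winner-take-all lateral inhibition and adaptive thresholds.
    Parameter names and values are illustrative, not from ref. [22]."""
    spikes = np.zeros_like(membrane_v, dtype=bool)
    over = membrane_v - thresholds
    if np.any(over > 0):
        winner = int(np.argmax(over))     # only the strongest neuron fires
        spikes[winner] = True
        thresholds[winner] += theta_plus  # raise winner's threshold (homeostasis)
        membrane_v[:] = 0.0               # lateral inhibition resets the layer
    return spikes

# Example: neuron 1 crosses its threshold, fires, and suppresses the others.
v, th = np.array([0.8, 1.2, 0.3]), np.ones(3)
print(wta_step(v, th))   # -> [False  True False]; th[1] is raised, v is reset
```

Raising the winner's threshold discourages any single neuron from dominating, which is one common way such networks spread selectivity across the layer.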
“…In ref. [22], the spike time of an input neuron is inversely proportional to the input signal. In this paper, their relationship is described as follows:…”
Section: Input Layer
confidence: 99%
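The encoding rule this excerpt describes (stronger inputs fire earlier) can be sketched as a simple first-spike latency code; the scale `alpha`, the cap `t_max`, and the clipping are illustrative assumptions rather than the exact formula of ref. [22] or the citing paper.

```python
import numpy as np

def latency_encode(x, alpha=10.0, t_max=100.0, eps=1e-6):
    """Map normalized input intensities x to first-spike times so that
    spike time is inversely proportional to the input (t = alpha / x),
    capped at t_max. All parameters here are illustrative assumptions."""
    x = np.clip(np.asarray(x, dtype=float), eps, None)  # avoid division by zero
    return np.minimum(alpha / x, t_max)

# Example: the strongest input spikes first, weaker inputs spike later.
print(latency_encode([1.0, 0.5, 0.1]))  # -> [ 10.  20. 100.] (in ms)
```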