2015
DOI: 10.1162/neco_a_00713
Hardware-Amenable Structural Learning for Spike-Based Pattern Classification Using a Simple Model of Active Dendrites

Abstract: This letter presents a spike-based model that employs neurons with functionally distinct dendritic compartments for classifying high-dimensional binary patterns. The synaptic inputs arriving on each dendritic subunit are nonlinearly processed before being linearly integrated at the soma, giving the neuron the capacity to perform a large number of input-output mappings. The model uses sparse synaptic connectivity, where each synapse takes a binary value. The optimal connection pattern of a neuron is learned by …
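The architecture described in the abstract is easy to sketch. Below is a minimal, illustrative NumPy version of a neuron with nonlinear dendritic subunits: each branch linearly sums a sparse subset of binary inputs through binary synapses, a branch nonlinearity is applied (a quadratic is assumed purely for illustration; the letter's actual subunit function may differ), and the soma linearly integrates the branch outputs. All names and sizes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

D, K, S = 100, 10, 5  # input dimension, dendritic branches, synapses per branch (illustrative)

# Sparse binary connectivity: each branch samples S of the D binary input lines.
branch_inputs = np.stack([rng.choice(D, size=S, replace=False) for _ in range(K)])

def soma_output(x, branch_inputs):
    """Forward pass of a neuron with nonlinear dendrites.

    x : binary input pattern, shape (D,).
    Each branch linearly sums its synaptic inputs (binary weights, all 1),
    a branch nonlinearity is applied (quadratic assumed for illustration),
    and the soma linearly integrates the branch outputs.
    """
    branch_sums = x[branch_inputs].sum(axis=1)  # linear sum per dendritic branch
    return np.sum(branch_sums ** 2)             # nonlinear branch -> linear soma

x = rng.integers(0, 2, size=D)  # a random high-dimensional binary pattern
print(soma_output(x, branch_inputs))
```

In this setting, structural learning amounts to searching over the connection pattern (`branch_inputs` above) rather than over graded weights, which is what makes binary, sparse synapses attractive for hardware.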

Cited by 23 publications (36 citation statements) · References 73 publications

“…Hussain et al. implemented a model which clusters correlated synapses on the same dendritic branch with a hardware-friendly learning rule (Hussain et al., 2015). The proposed model attains performance comparable to Support Vector Machines and Extreme Learning Machines on binary classification benchmarks while using fewer computational resources.…”
Section: Introduction (mentioning)
confidence: 99%
“…Conventionally, at least D × L random weights are needed for the random projection operation in the first layer of ELM to get the hidden layer matrix H. However, if the number of implemented hidden layer neurons is N (N < L), the hardware can only provide a D × N random projection matrix W comprising weights w_ij (i = 1, 2, …, D and j = 1, 2, …, N). However, noting that we have a total of D × N random numbers on the chip, we can borrow concepts from combinatorics-based learning [20][21][22] to realize that the total number N_w of D-dimensional weight vectors we can make is given by:…”
Section: Technique Of Virtual Expansion By Weight Rotation (mentioning)
confidence: 99%
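The excerpt's formula for N_w is elided, so the sketch below only illustrates the general idea of combinatorial expansion: a pool of D × N stored random numbers can furnish many more than N distinct D-dimensional projection vectors. The rotation scheme shown (reading D consecutive values at shifted offsets from the flattened pool) is an assumption chosen for illustration, not necessarily the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(1)

D, N = 8, 4                      # input dimension, physically implemented hidden neurons (illustrative)
W = rng.standard_normal((D, N))  # the only random numbers stored on chip

pool = W.flatten()               # D*N stored random values, treated as a circular pool

def virtual_weight_vector(offset):
    """One plausible weight-rotation scheme (an assumption, not the paper's
    exact construction): read D consecutive values from the circular pool
    starting at `offset`, yielding up to D*N distinct virtual weight
    vectors from the same D*N stored numbers."""
    idx = (offset + np.arange(D)) % (D * N)
    return pool[idx]

# Virtually expand the hidden layer well beyond the N physical neurons.
virtual_W = np.stack([virtual_weight_vector(k) for k in range(D * N)], axis=1)
print(virtual_W.shape)  # (D, D*N): far more projection vectors than N
```

Whatever the precise scheme, the point of the excerpt stands: the D × N numbers already on chip support many more than N distinct D-dimensional projections, with the exact count N_w given by the elided formula.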
“…This architecture, which we refer to as Winner-Take-All employing Neurons with NonLinear Dendrites (WTA-NNLD), uses a novel branch-specific Spike Timing Dependent Plasticity based Network Rewiring (STDP-NRW) learning rule for its training. We have earlier presented [23] a branch-specific STDP rule for batch learning of a supervised classifier constructed of NNLDs. The primary differences between our current approach and [23] are:…”
Section: Introduction (mentioning)
confidence: 99%
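As a rough sketch of how a WTA-NNLD readout could operate (assumptions: one NNLD neuron per class, a quadratic branch nonlinearity, and a hard argmax as the winner-take-all; the excerpt specifies none of these), the class whose neuron produces the largest somatic response wins:

```python
import numpy as np

rng = np.random.default_rng(2)

D, K, S, C = 100, 10, 5, 3  # inputs, branches per neuron, synapses per branch, classes (illustrative)

# One NNLD neuron per class; sparse connectivity per branch, binary synapses.
conn = rng.integers(0, D, size=(C, K, S))

def nnld_response(x, conn_c):
    branch_sums = x[conn_c].sum(axis=1)  # linear dendritic sums, shape (K,)
    return np.sum(branch_sums ** 2)      # assumed quadratic branch nonlinearity

def wta_classify(x):
    """Winner-take-all over the NNLD somatic responses: the neuron with
    the largest response determines the predicted class."""
    responses = [nnld_response(x, conn[c]) for c in range(C)]
    return int(np.argmax(responses))

x = rng.integers(0, 2, size=D)
print(wta_classify(x))
```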