Continual Learning with Deep Artificial Neurons
Preprint, 2020
DOI: 10.48550/arxiv.2011.07035

Abstract: Neurons in real brains are enormously complex computational units. Among other things, they're responsible for transforming inbound electro-chemical vectors into outbound action potentials, updating the strengths of intermediate synapses, regulating their own internal states, and modulating the behavior of other nearby neurons. One could argue that these cells are the only things exhibiting any semblance of real intelligence. It is odd, therefore, that the machine learning community has, for so long, relied up…

Cited by 2 publications (2 citation statements) | References 12 publications
“…Works similar to ours have proposed to find a path of relevant weights to solve the task [38], freezing used weights, and limiting learning of new tasks. Others use different functions as components in the network, either Hypernetworks [39], Deep Artificial Neurons (DANs) [40], or Compositional Structures [41], so that the network components are more flexible during learning.…”

Section: Related Work
Confidence: 99%
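As context for the DAN reference [40]: in that design, each node of the host network is itself a small neural network whose weights are shared across all nodes in a layer. The following is a minimal PyTorch sketch of the idea; the class name, vector width, and internal DAN architecture are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DANLayer(nn.Module):
    """Layer whose units share one tiny MLP (the "DAN") instead of a scalar activation."""
    def __init__(self, in_features, out_features, vector_dim=2, dan_hidden=4):
        super().__init__()
        # Per-layer synapses: map the input to one small vector per unit.
        self.synapses = nn.Linear(in_features, out_features * vector_dim)
        self.out_features = out_features
        self.vector_dim = vector_dim
        # One DAN shared by every unit in the layer: vector in -> scalar out.
        self.dan = nn.Sequential(
            nn.Linear(vector_dim, dan_hidden),
            nn.Tanh(),
            nn.Linear(dan_hidden, 1),
        )

    def forward(self, x):
        # (batch, out_features, vector_dim): one input vector per unit.
        z = self.synapses(x).view(-1, self.out_features, self.vector_dim)
        # nn.Linear broadcasts over leading dimensions, so the shared DAN
        # is applied to every unit's vector; squeeze back to scalars.
        return self.dan(z).squeeze(-1)

layer = DANLayer(784, 128)
y = layer(torch.randn(32, 784))  # y.shape == (32, 128)
```

Because the DAN parameters are shared, they can be trained separately from the per-layer synaptic weights, which is what makes the components "more flexible during learning" in the sense of the quote above.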
“…They showed that a single neuron modeled as a dendritic ANN with sparse connectivity could solve binary classification tasks (e.g., MNIST) and benefit from having the same input at different dendritic sites. Another approach utilizes multiple dendritic layers in each node of a deep ANN, and reduces catastrophic forgetting in a continual learning task [70]. To explore the potential advantages of human dCaAPs, a recent study used their respective transfer function in various deep learning architectures and found a small but significant increase in classification performance for various tasks, especially when used in combination with sparse connectivity resembling that of dendritic trees [71].…”

Section: Active Ionic Mechanisms and Dendritic Computations
Confidence: 99%
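The dCaAP (dendritic calcium action potential) transfer function referenced here is non-monotonic: the response turns on near a threshold and is dampened again as the input grows stronger, unlike a ReLU. Below is a hedged sketch of such an activation using an assumed functional form (a rising sigmoid gated by a falling one), not the exact definition used in [71].

```python
import torch

def dcaap(x, threshold=0.0, width=0.3, window=2.0):
    """Bump-shaped response: activates near `threshold`, suppressed for strong input."""
    rise = torch.sigmoid((x - threshold) / width)            # turns on near threshold
    fall = torch.sigmoid(-(x - threshold - window) / width)  # damps stronger inputs
    return rise * fall
```

The `width` and `window` parameters here are hypothetical knobs controlling the sharpness and extent of the bump; any drop-in use would substitute this function for the usual activation in a standard deep network.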