2019
DOI: 10.1109/msp.2019.2933846
Neuroscience-Inspired Online Unsupervised Learning Algorithms: Artificial Neural Networks

Abstract: Although the currently popular deep learning networks achieve unprecedented performance on some tasks, the human brain still has a monopoly on general intelligence. Motivated by this and by the biological implausibility of deep learning networks, we developed a family of biologically plausible artificial neural networks (NNs) for unsupervised learning. Our approach is based on optimizing principled objective functions containing a term that matches the pairwise similarity of outputs to the similarity of inputs, hence…
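The similarity-matching idea described in the abstract can be made concrete with a small sketch. Below is a minimal, illustrative NumPy implementation of an online similarity-matching network with Hebbian feedforward and anti-Hebbian lateral weights, in the spirit of the authors' related work on similarity-based networks; the variable names, initialization, and constant learning rate are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

# Minimal sketch of an online similarity-matching network:
# Hebbian feedforward weights W, anti-Hebbian lateral weights M.
# Learning rate, initialization, and dimensions are assumptions.

rng = np.random.default_rng(0)
n_in, n_out, T = 10, 3, 5000
X = rng.standard_normal((n_in, T))  # input stream, one column per time step

W = 0.1 * rng.standard_normal((n_out, n_in))  # feedforward (Hebbian) weights
M = np.eye(n_out)                             # lateral (anti-Hebbian) weights
eta = 1e-3                                    # learning rate (assumed constant)

for t in range(T):
    x = X[:, t]
    # Neural dynamics settle at the fixed point of y = W x - (M - I) y,
    # i.e. y = M^{-1} W x, computed here by a linear solve.
    y = np.linalg.solve(M, W @ x)
    # Local updates: Hebbian for W (input-output correlation),
    # anti-Hebbian for M (output-output correlation).
    W += eta * (np.outer(y, x) - W)
    M += eta * (np.outer(y, y) - M)
```

At convergence, the Gram matrix of the outputs approximates the Gram matrix of the inputs restricted to the top principal subspace, which is the sense in which output similarity matches input similarity in this family of objectives.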

Cited by 43 publications (29 citation statements)
References 36 publications
“…While not explicitly our goal here, translating principles of neural computation to improve machine learning is a long-sought goal at the interface of computer science and neuroscience (39, 95–97). Specifically, these insights may be applicable for deep learning (98) [e.g., to enhance attention-modulated artificial neural networks (99, 100)], for online similarity search problems, and in electronic noses for event-based smell processing.…”
Section: Discussion
confidence: 99%
“…More neuroscience-oriented approaches attempt to find which learning rules could implement a biologically plausible version of backpropagation [26, 27]. In contrast to most works described previously, which rely on numerical optimization to find learning rules, others analytically develop and infer learning rules that can elicit certain biologically inspired functions [7, 8, 28–30].…”
Section: Related Work
confidence: 99%
“…Research in ToDL attempts to demystify the various hidden transformations of deep architectures to provide theoretical guarantees and an understanding of the learning, approximation, optimization, and generalization capabilities of deep networks and their variants, such as FNNs, convolutional neural networks (CNNs) [26], [27], recurrent neural networks (RNNs) [28], [29], autoencoders (AEs) [30]–[32], generative adversarial networks (GANs) [33]–[35], ResNet [36], and DenseNet [37], [38]. To interpret/explain learning [39], approximation [40], optimization [41], [42], and generalization [43] in these deep networks employed for classification [44], [45] and regression problems [46], advances in ToDL have been made via numerous frameworks such as mean field theory [47]–[49], random matrix theory [50], [51], tensor factorization [52], [53], optimization theory [54]–[57], kernel learning [58]–[60], linear algebra [61], [62], spline theory [63], [64], theoretical neuroscience [65]–[67], high-dimensional probability and statistics [68]–[70], manifold theory [48], [71], Fourier analysis [72], and scattering networks (vis-à-vis a wavelet transform)…”
Section: A. Related Work and Motivation
confidence: 99%
“…implemented by adding the outputs of the $n$ networks $\Phi^{\times}_{D,\varepsilon}(w,x)$, each fed with $(w_i, x_i)$, via an additional output layer that comprises one neuron. Accordingly, (77) holds, where (i) follows from (67) and the assignment ε = εn; (ii) follows from these assignments: …”
Section: Appendix G, Proof of Theorem
confidence: 99%