2020
DOI: 10.1073/pnas.1912804117
Learning probabilistic neural representations with randomly connected circuits

Abstract: The brain represents and reasons probabilistically about complex stimuli and motor actions using a noisy, spike-based neural code. A key building block for such neural computations, as well as the basis for supervised and unsupervised learning, is the ability to estimate the surprise or likelihood of incoming high-dimensional neural activity patterns. Despite progress in statistical modeling of neural responses and deep learning, current approaches either do not scale to large neural populations or cannot be i…
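To make the likelihood-estimation idea concrete, here is a minimal Python sketch of a thresholded random-projection model: a binary population pattern is passed through fixed, sparse, random projections, and the surprise of the pattern is scored by a learned weighting of the projection outputs. The dimensions, nonlinearity, and the crude contrastive-style update below are illustrative assumptions, not the model or fitting procedure from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not taken from the paper).
n_neurons = 100      # size of the binary population pattern
n_projections = 200  # number of sparse random projections
sparsity = 5         # inputs per projection (hypothetical value)

# Fixed, sparse, random connectivity: each projection samples a few neurons.
A = np.zeros((n_projections, n_neurons))
for i in range(n_projections):
    idx = rng.choice(n_neurons, size=sparsity, replace=False)
    A[i, idx] = rng.normal(size=sparsity)

thresholds = rng.normal(size=n_projections)   # per-projection thresholds
lam = np.zeros(n_projections)                 # learnable weights

def log_score(x, lam):
    """Unnormalized log-probability of a binary pattern x:
    a weighted sum of thresholded random projections."""
    h = (A @ x > thresholds).astype(float)    # nonlinear feature vector
    return lam @ h

# Toy "learning": nudge lam so observed patterns score higher than
# shuffled surrogates (a crude stand-in for proper likelihood fitting).
data = (rng.random((500, n_neurons)) < 0.1).astype(float)
for _ in range(50):
    for x in data:
        x_null = rng.permutation(x)           # surrogate pattern
        h_obs = (A @ x > thresholds).astype(float)
        h_null = (A @ x_null > thresholds).astype(float)
        lam += 0.01 * (h_obs - h_null)        # contrastive-style update

surprise = -log_score(data[0], lam)           # higher = more surprising
```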


Cited by 34 publications (53 citation statements); references 60 publications (79 reference statements).
“…Our point here about the theoretical utility of random sequential neural codes extends the literature on how neural circuits can exploit random designs to perform interesting computational functions, such as separable neural representations (Fusi, Miller, and Rigotti 2016; Rigotti et al 2013; Babadi and Sompolinsky 2014; Lindsay et al 2017), short term memory of input patterns and dynamics (S. Ganguli and Sompolinsky 2010; Charles, Yin, and Rozell 2017; Jaeger and Haas 2004; Bouchacourt and Buschman 2019), and unsupervised learning of the structure of input signals (Maoz et al 2020).…”
Section: Discussion
confidence: 61%
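As an illustration of the "separable neural representations" point in the statement above, here is a minimal Python sketch, with hypothetical parameters, of how a fixed random nonlinear expansion can make an XOR-like task solvable by a simple linear (perceptron) readout:

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR-like task: not linearly separable in the original 2-D input space.
X = rng.integers(0, 2, size=(400, 2)).astype(float)
y = np.logical_xor(X[:, 0] > 0.5, X[:, 1] > 0.5).astype(float)
X += 0.1 * rng.normal(size=X.shape)            # small input noise

def expand(X, n_units=50):
    """Fixed random nonlinear expansion (illustrative parameters)."""
    W = rng.normal(size=(X.shape[1], n_units))
    b = rng.normal(size=n_units)
    return (X @ W + b > 0).astype(float)       # threshold units

def perceptron_accuracy(F, y, epochs=20):
    """Train a simple perceptron readout and report training accuracy."""
    w = np.zeros(F.shape[1]); b = 0.0
    for _ in range(epochs):
        for f, t in zip(F, y):
            pred = float(f @ w + b > 0)
            w += (t - pred) * f
            b += (t - pred)
    return np.mean((F @ w + b > 0) == y)

print("raw input:        ", perceptron_accuracy(X, y))
print("random expansion: ", perceptron_accuracy(expand(X), y))
```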
“…One example of this class of SNNs is a model for neural coding inspired by sparse coding and by the random connectivity of real neural circuits, in which structural changes in the random connectivity are induced by a pruning process. In this work [44], random sparse connectivity has been presented as a key principle of cortical computation. In order to implement pruning, an STDP (Spike-Timing-Dependent Plasticity) model has been proposed [45].…”
Section: Discussion
confidence: 99%
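A minimal sketch of the mechanism described in the statement above, assuming pair-based STDP followed by removal of weak synapses; the amplitudes, time constant, and pruning threshold are illustrative placeholders, not values from refs. [44, 45]:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative constants (not taken from refs. [44, 45]).
n_pre, n_post = 50, 20
A_plus, A_minus = 0.01, 0.012     # potentiation / depression amplitudes
tau = 20.0                        # STDP time constant (ms)
prune_threshold = 0.05            # synapses below this weight are removed

W = rng.uniform(0.0, 0.5, size=(n_pre, n_post))   # random initial weights
mask = rng.random((n_pre, n_post)) < 0.2          # sparse random connectivity
W *= mask

def stdp_update(W, t_pre, t_post):
    """Pair-based STDP: potentiate when pre precedes post, depress otherwise.
    t_pre, t_post hold the most recent spike time (ms) of each neuron."""
    dt = t_post[None, :] - t_pre[:, None]          # post minus pre, per synapse
    dw = np.where(dt > 0,
                  A_plus * np.exp(-dt / tau),      # causal pairing: strengthen
                  -A_minus * np.exp(dt / tau))     # acausal pairing: weaken
    return np.clip(W + dw * (W > 0), 0.0, 1.0)     # update only existing synapses

# Drive with random spike times, then prune weak synapses.
for _ in range(200):
    t_pre = rng.uniform(0, 100, size=n_pre)
    t_post = rng.uniform(0, 100, size=n_post)
    W = stdp_update(W, t_pre, t_post)

W[W < prune_threshold] = 0.0                       # structural pruning step
print("surviving synapses:", int((W > 0).sum()), "of", int(mask.sum()))
```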
“…Alternatives such as inverse methods based on Ising models utilize time-consuming learning schemes (Tkacik et al, 2006) though recently faster algorithms have been proposed (Cocco et al, 2009, Maoz et al, 2020). Other approaches applicable to spike trains include generalized linear models (Pillow et al, 2008) or spike train cross-correlograms (English et al, 2017).…”
Section: Discussion
confidence: 99%
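To illustrate why exact maximum-likelihood fitting of the Ising models mentioned above is time-consuming, here is a small Python sketch that evaluates the pairwise Ising log-probability by brute-force summation over all 2^N binary patterns; this is tractable only because N is small, and the parameters are random placeholders:

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

# Small population so the partition function is tractable by brute force.
N = 10
h = rng.normal(scale=0.1, size=N)          # biases (illustrative values)
J = rng.normal(scale=0.1, size=(N, N))
J = np.triu(J, 1); J = J + J.T             # symmetric couplings, zero diagonal

def energy(x):
    """Pairwise Ising energy of a binary pattern x in {0, 1}^N."""
    return -(h @ x) - 0.5 * x @ J @ x

# The partition function sums over all 2^N patterns, which is what makes
# exact maximum-likelihood fitting of Ising models slow for large populations.
states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
logZ = np.log(np.sum(np.exp([-energy(s) for s in states])))

def log_prob(x):
    return -energy(x) - logZ

print("log P of the all-silent pattern:", log_prob(np.zeros(N)))
```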
“…Existing methods for estimating functional interactions between multi-dimensional time series include linear regression (79), Granger causality (GC) (80), and inter-areal coherence (81,82). … (16,88). Other approaches applicable to spike trains include generalized linear models (89) or spike train cross-correlograms (90).…”
Section: Estimating Functional Connectivity
confidence: 99%
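A minimal sketch of a spike-train cross-correlogram of the kind referenced above, computed on synthetic spike times; the window and bin size are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)

def cross_correlogram(spikes_a, spikes_b, window=0.05, bin_size=0.001):
    """Histogram of spike-time differences (b minus a) within +/- window seconds,
    a common screen for putative functional connections between neuron pairs."""
    edges = np.arange(-window, window + bin_size, bin_size)
    counts = np.zeros(len(edges) - 1)
    for t in spikes_a:
        dt = spikes_b - t
        dt = dt[(dt >= -window) & (dt <= window)]
        counts += np.histogram(dt, bins=edges)[0]
    return edges[:-1] + bin_size / 2, counts

# Toy data: neuron B tends to fire ~3 ms after neuron A (illustrative).
spikes_a = np.sort(rng.uniform(0, 100, size=2000))
follow = spikes_a[rng.random(spikes_a.size) < 0.3] + 0.003
spikes_b = np.sort(np.concatenate([rng.uniform(0, 100, size=1500), follow]))

lags, counts = cross_correlogram(spikes_a, spikes_b)
print("peak lag (ms):", 1000 * lags[np.argmax(counts)])
```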