2018
DOI: 10.1101/316190
Preprint

Finding informative neurons in the brain using Multi-Scale Relevance

Abstract: We propose a metric, called Multi-Scale Relevance (MSR), to score neurons by their prominence in encoding the animal's behaviour observed in a multi-electrode array recording experiment. The MSR assumes that relevant neurons exhibit wide variability in their dynamical state, in response to the external stimulus, across different time scales. It is a non-parametric, fully featureless indicator, in that it uses only the time stamps of the firing activity, without resorting to any a priori co…
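Since the abstract says the MSR uses only spike time stamps, its computation can be sketched as follows. This is a hedged illustration, not the authors' implementation: the logarithmic sweep of bin widths, the normalisation by log N, and the area-under-the-curve definition are assumptions inferred from the description above.

```python
import numpy as np

def entropies(spike_times, dt, t0, t1):
    """Resolution H[s] and relevance H[k] at one time scale dt (assumed definitions)."""
    n_bins = max(1, int(np.ceil((t1 - t0) / dt)))
    counts, _ = np.histogram(spike_times, bins=n_bins, range=(t0, t1))
    total = counts.sum()
    # Resolution H[s]: entropy of the spikes' distribution over bins
    # (empty bins contribute zero and are dropped).
    p = counts[counts > 0] / total
    h_s = -np.sum(p * np.log(p))
    # Relevance H[k]: entropy of the "frequency of frequencies",
    # where m_k bins carry exactly k spikes each.
    ks, m_k = np.unique(counts[counts > 0], return_counts=True)
    q = ks * m_k / total
    h_k = -np.sum(q * np.log(q))
    return h_s, h_k

def multi_scale_relevance(spike_times, n_scales=50):
    """Sketch of an MSR-style score: area under the (H[s], H[k]) curve
    traced as the bin width varies, with both axes normalised by log N."""
    spike_times = np.asarray(spike_times, dtype=float)
    t0, t1 = spike_times.min(), spike_times.max()
    n = len(spike_times)
    # Sweep bin widths logarithmically, from roughly one spike per bin
    # up to a single bin covering the whole recording.
    dts = np.logspace(np.log10((t1 - t0) / n), np.log10(t1 - t0), n_scales)
    pts = sorted(entropies(spike_times, dt, t0, t1) for dt in dts)
    h_s, h_k = np.array(pts).T / np.log(n)  # normalise both axes by log N
    return np.trapz(h_k, h_s)               # area under the H[k]-vs-H[s] curve
```

At very fine resolution every occupied bin holds one spike (H[k] = 0) and at very coarse resolution a single bin holds everything (H[s] = H[k] = 0), so neurons whose relevance stays high across intermediate time scales accumulate a larger area, matching the abstract's intuition that relevant neurons show wide variability across scales.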

Cited by 4 publications (10 citation statements)
References 38 publications
“…Within this picture, we found that DL achieves efficient data representations with maximal relevance at each level of resolution. Interestingly, maximal relevance has been shown to be an efficient criterion for extracting relevant variables in high dimensional data analysis also in other contexts [21,22]. The resolution of the representation at each layer of the architecture is determined in an unsupervised manner, depending on the data.…”
Section: Discussion
confidence: 99%
“…Eq. (20) then implies that H[E] increases by ν bits. The region ν > 0 corresponds to "redundant" representations, whereas for ν < 0, some informative bits are "lost in compression".…”
Section: The Thermodynamics of Efficient Representations
confidence: 99%
“…As such, we believe that our results may provide a guiding principle to extract relevant variables from high dimensional data (see e.g. [19,20]), or to shed light on the principles underlying deep learning (see e.g. [21,22]).…”
confidence: 95%
“…still follows the Gibbs distribution Eq. (19), but the value of β is dominated by the variables t if γ v < γ y and by the variables z otherwise. Therefore, the most relevant set of variables are those with the smallest value of γ.…”
Section: Discussion
confidence: 99%