Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP '98 (Cat. No.98CH36181)
DOI: 10.1109/icassp.1998.681442
Information-theoretic analysis of neural coding

Cited by 39 publications (75 citation statements)
References 34 publications
“…The Kullback-Leibler distance characterizes how well two responses can be distinguished by an optimal classifier [2]. Kullback's result allows teasing apart components of a measured distance into those required for a rate change and those that might be required to reflect an interval distribution change and a dependence change.…”
Section: Discussion
confidence: 99%
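As a rough illustration of the distinguishability claim in the excerpt above, the sketch below computes the Kullback-Leibler distance (in bits) between two discrete response distributions. The function name and histogram values are hypothetical, and the rate/interval/dependence decomposition described in the excerpt is not reproduced here.

```python
import numpy as np

def kl_distance_bits(p, q):
    """Kullback-Leibler distance D(p||q), in bits, between two discrete
    distributions defined over the same response bins."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    mask = p > 0  # bins with p == 0 contribute nothing to the sum
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Hypothetical spike-count histograms for two stimulus conditions:
# the larger the distance, the easier it is for an optimal classifier
# to tell the two responses apart.
p = [0.10, 0.40, 0.35, 0.15]
q = [0.25, 0.25, 0.25, 0.25]
print(kl_distance_bits(p, q))
```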
“…One was added systematically to all bins before the normalization of distributions that would be in the denominator of a ratio, to avoid division by zero; when the sample number n >> 1, this is analogous to assuming a uniform prior distribution and then finding the mean of the posterior probability distribution by using Bayes' rule (cf. Gelman, 1995; Johnson et al., 2001). According to Bayes' rule, the prior distribution is multiplied by a likelihood and divided by the probability of all data to give the posterior distribution.…”
Section: Methods
confidence: 99%
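A minimal sketch of the add-one adjustment described in that excerpt, assuming raw bin counts in a NumPy array; the function name and example counts are hypothetical.

```python
import numpy as np

def add_one_probabilities(counts):
    """Add one to every bin before normalizing, so no bin ends up with zero
    probability (and no likelihood-ratio denominator vanishes).
    For n >> 1 samples this matches the posterior mean obtained from a
    uniform prior via Bayes' rule."""
    counts = np.asarray(counts, dtype=float)
    return (counts + 1.0) / (counts.sum() + counts.size)

print(add_one_probabilities([0, 3, 7, 0]))  # every bin is now strictly positive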
“…For our purposes this is advantageous because it allows us to quantify the potential asymmetry in observing changes in different directions, i.e., comparing low-to-high variance changes with high-to-low variance changes. D_KL quantifies the intrinsic classification difficulty (Johnson et al., 2001). Although D_KL does not give the absolute probability of error for a specific decoder, it does limit the ultimate performance of any classifier; a one-bit increase in D_KL corresponds to a twofold decrease in error probability.…”
Section: Methods
confidence: 99%
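The last sentence of that excerpt can be read as a Chernoff–Stein-style asymptotic statement, roughly an error bound proportional to 2^(-D_KL) with D_KL in bits. The toy loop below only illustrates that scaling under this assumed reading; it is not the error rate of any specific decoder.

```python
# Each additional bit of D_KL halves the (asymptotic) error-probability bound.
for d_kl_bits in (1.0, 2.0, 3.0, 4.0):
    error_bound = 2.0 ** (-d_kl_bits)
    print(f"D_KL = {d_kl_bits:.0f} bits -> error bound proportional to {error_bound}")
```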