2021
DOI: 10.3389/fninf.2021.596443

A Quick and Easy Way to Estimate Entropy and Mutual Information for Neuroscience

Abstract: Calculations of entropy of a signal or mutual information between two variables are valuable analytical tools in the field of neuroscience. They can be applied to all types of data, capture non-linear interactions and are model independent. Yet the limited size and number of recordings one can collect in a series of experiments make their calculation highly prone to sampling bias. Mathematical methods to overcome this so-called “sampling disaster” exist, but require significant expertise, great time and compu…

Cited by 17 publications (10 citation statements). References 50 publications (109 reference statements).

Citation statements (ordered by relevance):

“…Further, we normalised I(X;Y) to scale the results between 0 (no mutual information) and 1 (perfect correlation). The normalisation uses the entropy H(X) of each individual signal, and can be calculated as (Kvålseth, 2017; Zbili & Rama, 2021): $NMI(X;Y) = I(X;Y) / \sqrt{H(X) \cdot H(Y)}$. And H(X) of the discrete random variable X is calculated from its probability ($P[x]$) and surprise ($\log P[x]$) as: $H(X) = -\sum_{i=1}^{n} P(x_i) \log P(x_i)$. …”
Section: Methods (mentioning)
confidence: 99%
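The two formulas quoted above can be turned into a short numerical sketch. The Python snippet below is a minimal illustration, not the toolbox published with the paper: it bins two continuous signals into histograms, estimates H(X), H(Y) and the joint entropy H(X,Y), and returns the normalised mutual information I(X;Y) / √(H(X)·H(Y)). The bin count and the base-2 logarithm are arbitrary choices for the example, and this naive plug-in estimator is exactly the kind of calculation that suffers from the sampling bias the paper addresses.

```python
import numpy as np

def entropy(x, bins=16):
    """Shannon entropy (bits) of a signal after binning it into a histogram."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                          # drop empty bins so log2 is defined
    return -np.sum(p * np.log2(p))

def normalised_mi(x, y, bins=16):
    """NMI(X;Y) = I(X;Y) / sqrt(H(X) * H(Y)), which lies between 0 and 1."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log2(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log2(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log2(pxy[pxy > 0]))
    mi = hx + hy - hxy                    # I(X;Y) = H(X) + H(Y) - H(X,Y)
    return mi / np.sqrt(hx * hy)

# Toy usage: a noisy copy of x shares more information with x than pure noise does.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
print(entropy(x))                                          # H(X) in bits for the binned signal
print(normalised_mi(x, x + 0.5 * rng.normal(size=5000)))   # relatively high
print(normalised_mi(x, rng.normal(size=5000)))             # close to 0
```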
“…Further, we normalised I(X;Y) to scale the results between 0 (no mutual information) and 1 (perfect correlation). The normalisation uses the entropy H(X) of each individual signal, and can be calculated as (Kvålseth, 2017; Zbili & Rama, 2021):…”
Section: Mutual Information (mentioning)
confidence: 99%
“…A video's entropy value was first calculated based on the sampled images from the video. As entropy values of images stay quite consistent across the video (Zbili & Rama, 2021), we used six equally spaced images to approximate a video's entropy. A total of 1,176 images were extracted from the 196 videos.…”
Section: Methods (mentioning)
confidence: 99%
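The frame-sampling approximation described in the statement above can be sketched as follows. This is an illustrative reconstruction, not the cited study's actual pipeline: it assumes grayscale frames stored as a NumPy array, computes the intensity-histogram entropy of six equally spaced frames, and averages them (the averaging step is an assumption; the quoted passage only says six equally spaced images were used to approximate the video's entropy).

```python
import numpy as np

def frame_entropy(frame, bins=256):
    """Shannon entropy (bits) of one grayscale frame's intensity histogram."""
    counts, _ = np.histogram(frame.ravel(), bins=bins, range=(0, 256))
    p = counts / counts.sum()
    p = p[p > 0]                          # keep only occupied bins
    return -np.sum(p * np.log2(p))

def video_entropy(frames, n_samples=6):
    """Approximate a video's entropy from n equally spaced frames (mean is assumed)."""
    idx = np.linspace(0, len(frames) - 1, n_samples).astype(int)
    return float(np.mean([frame_entropy(frames[i]) for i in idx]))

# Toy usage: a synthetic "video" of 100 random 64x64 8-bit frames.
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(100, 64, 64))
print(video_entropy(frames))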
“…The first point is simply one that we may need to accept: even as computers become faster, it will likely remain the case that, for example, the Pearson correlation will be faster to compute than the corresponding MI. However, the computations will get faster in absolute terms as computers get faster, and other fields with large datasets, such as neuroscience, have seen the benefits of these new methods [43, 44], with efficiency gains being made as well [45, 46]. On the other hand, software packages are becoming more readily available, and economics can benefit from the software advances that have been made in other fields.…”
Section: Limitations and Future Directions (mentioning)
confidence: 99%