2014
DOI: 10.1063/1.4903714

Survey on the estimation of mutual information methods as a measure of dependency versus correlation analysis

Abstract: In this survey, we present and compare different approaches to estimating Mutual Information (MI) from data in order to analyse general dependencies between variables of interest in a system. We demonstrate the performance difference between MI and correlation analysis, which is optimal only in the case of linear dependencies. First, we use a piecewise-constant Bayesian methodology with a general Dirichlet prior. In this estimation method, we use a two-stage approach where we approximate the probability distribution…
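
The abstract's starting point is a piecewise-constant (histogram-style) model of the joint distribution under a Dirichlet prior. As rough orientation only, here is a minimal sketch of that general idea in Python: a plug-in MI estimate from a Dirichlet-smoothed 2-D histogram, compared with Pearson correlation on a purely nonlinear dependency. The bin count, the pseudocount alpha, and the test data are assumptions of this sketch; it is not the paper's exact two-stage Bayesian estimator.

    # Plug-in MI (nats) from a 2-D histogram smoothed with a symmetric
    # Dirichlet pseudocount; illustrative only, not the paper's estimator.
    import numpy as np

    def mi_histogram(x, y, bins=16, alpha=0.5):
        counts, _, _ = np.histogram2d(x, y, bins=bins)
        p_xy = (counts + alpha) / (counts + alpha).sum()  # posterior-mean cell probabilities
        p_x = p_xy.sum(axis=1, keepdims=True)             # marginal of x
        p_y = p_xy.sum(axis=0, keepdims=True)             # marginal of y
        return float(np.sum(p_xy * np.log(p_xy / (p_x * p_y))))

    rng = np.random.default_rng(0)
    x = rng.normal(size=5000)
    y = x**2 + 0.1 * rng.normal(size=5000)        # nonlinear dependence on x

    print("Pearson r:", np.corrcoef(x, y)[0, 1])  # near 0: linear analysis misses it
    print("MI (nats):", mi_histogram(x, y))       # clearly positive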

Cited by 10 publications (9 citation statements). References 8 publications.

Citation statements, ordered by relevance:
“…However, a potential problem with traditional correlational analyses is that they are not optimal at estimating nonlinear dependencies (Gencaga et al 2014), which are often observed in neural networks. This drawback can be addressed by using a dependence measure known as mutual information (MI) (Cellucci et al 2005;Shannon 1948).…”
Section: Introduction
confidence: 99%
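
The point made in this excerpt is easy to reproduce. Below is a small, self-contained demonstration (an illustration, not any cited author's procedure) in which Pearson correlation is close to zero under a strong nonlinear dependence, while a k-nearest-neighbour MI estimator (scikit-learn's mutual_info_regression) detects it; the data and parameters are assumed for the example.

    # Pearson correlation vs. MI on a symmetric nonlinear relationship.
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.default_rng(1)
    x = rng.uniform(-1.0, 1.0, size=4000)
    y = np.cos(np.pi * x) + 0.05 * rng.normal(size=4000)  # even function of x

    r = np.corrcoef(x, y)[0, 1]
    mi = mutual_info_regression(x.reshape(-1, 1), y, random_state=0)[0]
    print(f"Pearson r = {r:.3f}")   # ~0: no linear dependence to find
    print(f"MI (nats) = {mi:.3f}")  # > 0: the dependence is detected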
“…This particular product was obtained by the best relevant set of variables as determined by the highest mutual information value between the two variables. The excerpt's flattened table rows (a variable subset followed by two reported values):
3,4,5,6,7,8,9,10,11,12,13,15: 0.756, 0.924
1,2,4,5,6,8,10,11,12,13,15: 0.756, 0.921
1,2,4,5,7,8,10,11,12,13,15: 0.755, 0.923
2,3,4,5,7,8,9,10,11,12,13,15: 0.755, 0.924…”
Section: Results and Conclusion
confidence: 99%
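
The excerpt describes picking a variable subset by the highest MI. A hedged sketch of that general idea follows: rank candidate inputs by their estimated MI with a target and keep the top k. The names X, y, and select_by_mi are hypothetical, and the cited work's exact selection procedure is not reproduced here.

    # Rank features by estimated MI with the target; keep the top k.
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    def select_by_mi(X, y, k):
        mi = mutual_info_regression(X, y, random_state=0)
        return np.argsort(mi)[::-1][:k], mi  # indices of the k highest-MI features

    rng = np.random.default_rng(2)
    X = rng.normal(size=(2000, 6))
    y = X[:, 1] ** 2 + 0.5 * X[:, 4] + 0.1 * rng.normal(size=2000)

    idx, mi = select_by_mi(X, y, k=2)
    print("selected feature indices:", idx)   # expected: 1 and 4
    print("MI estimates:", np.round(mi, 3))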
“…This relation shows the generality of the normalized correlation measure. There are several methods to estimate MI from data [15,20,5]. We applied the Variable Bin Width Histogram Approach [3,21] to compute the MI between the observed and predicted AOD.…”
Section: Mutual Information
confidence: 99%
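
One common variable-bin-width scheme places bin edges at empirical quantiles, so each marginal bin holds roughly the same number of samples. The sketch below uses that equal-frequency rule as an illustrative stand-in; the excerpt's references [3,21] may define their adaptive binning differently, and the observed/predicted series here are simulated placeholders, not AOD data.

    # Plug-in MI (nats) with equal-frequency (variable-width) bins.
    import numpy as np

    def quantile_edges(v, bins):
        # Edges at empirical quantiles: roughly equal counts per bin.
        return np.quantile(v, np.linspace(0.0, 1.0, bins + 1))

    def mi_equal_frequency(x, y, bins=10):
        counts, _, _ = np.histogram2d(
            x, y, bins=[quantile_edges(x, bins), quantile_edges(y, bins)])
        p = counts / counts.sum()
        px = p.sum(axis=1, keepdims=True)
        py = p.sum(axis=0, keepdims=True)
        nz = p > 0                             # skip empty cells (0 log 0 := 0)
        return float(np.sum(p[nz] * np.log(p[nz] / (px * py)[nz])))

    rng = np.random.default_rng(3)
    observed = rng.gamma(2.0, 1.0, size=3000)           # placeholder "observed" series
    predicted = observed + 0.3 * rng.normal(size=3000)  # placeholder "predicted" series
    print("MI (nats):", mi_equal_frequency(observed, predicted))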
“…Furthermore, traditional correlation analysis, such as Pearson correlation, has been shown to be optimal only in the case of linear dependencies between variables, while MI can be applied to random variables that exhibit linear as well as nonlinear dependencies (Gencaga et al 2014), which are often demonstrated in neural networks (Schöner and Kelso 1988). Comparisons of marginal entropy, meaning the average amount of information provided by, in this case, a single spike time series, have also been used to measure the relative information content of time series in the analysis of bursting and tonic firing in the thalamus (Reinagel et al 1999).…”
Section: Discussion
confidence: 99%
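
For the "marginal entropy" comparison mentioned in this excerpt, the quantity in question is the Shannon entropy of a single series. A hedged illustration follows, computing the plug-in entropy of spike counts per time bin for a simulated tonic and a simulated bursting train; the firing rates and bin structure are assumptions of this sketch, not the procedure of Reinagel et al. (1999).

    # Plug-in Shannon entropy (bits) of binned spike counts.
    import numpy as np

    def marginal_entropy(counts):
        _, freq = np.unique(counts, return_counts=True)
        p = freq / freq.sum()
        return float(-np.sum(p * np.log2(p)))

    rng = np.random.default_rng(4)
    tonic = rng.poisson(1.0, size=10000)        # steady, low-rate firing
    burst = np.where(rng.random(10000) < 0.1,   # 10% of bins are burst bins
                     rng.poisson(8.0, size=10000),
                     rng.poisson(0.2, size=10000))

    print("tonic entropy (bits):", round(marginal_entropy(tonic), 3))
    print("bursting entropy (bits):", round(marginal_entropy(burst), 3))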