2019
DOI: 10.3390/e21020140

Applications of Information Theory in Solar and Space Physics

Abstract: Characterizing and modeling processes at the Sun and in space plasmas in our solar system are difficult because the underlying physics is often complex, nonlinear, and not well understood. The drivers of a system are often nonlinearly correlated with one another, which makes it a challenge to understand the relative effects of each driver. However, entropy-based information theory can be a valuable tool for determining the information flow among various parameters, establishing causalities, and untangling the drivers…

Cited by 37 publications (27 citation statements) | References 106 publications

Citation statements (ordered by relevance):
“…There are several ways in which this can be done, and this is currently an active area of research (e.g., Ali et al.; Gao et al.; Jiang & Wang). One way is to bin variables uniformly using bins of a predefined size for each variable (e.g., Wing et al.; Wing & Johnson). Sturges () proposed that, for a normal distribution, the optimal number of bins is n_b = log_2(n) + 1 with bin width w = range / n_b, where n is the total number of measurements in the data set and range is the maximum minus the minimum value of the variable.…”
Section: Methods (mentioning, confidence: 99%)
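A minimal sketch of the uniform binning step described in this excerpt, assuming NumPy and a 1-D array of measurements; the helper names (sturges_bins, bin_probabilities) are illustrative, not taken from the cited papers.

```python
import numpy as np

def sturges_bins(x):
    """Number of uniform bins from Sturges' rule: n_b = log2(n) + 1."""
    return int(np.ceil(np.log2(len(x)) + 1))

def bin_probabilities(x):
    """Discretize x into n_b uniform bins of width w = range / n_b and
    return the empirical bin probabilities used for entropy estimates."""
    n_b = sturges_bins(x)
    w = (x.max() - x.min()) / n_b
    counts, edges = np.histogram(x, bins=n_b)
    return counts / counts.sum(), edges, w

# Example: 10,000 samples drawn from a normal distribution.
rng = np.random.default_rng(0)
p, edges, w = bin_probabilities(rng.normal(size=10_000))
print(len(p), w)  # 15 bins for n = 10,000
```

Other binning strategies (equiprobable bins, kernel or nearest-neighbor estimators) are equally valid; Sturges' rule is only one of the options the excerpt mentions.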
“…A more detailed description of MI and other concepts of information theory and their application in space physics can be found in Wing and Johnson (). Below, the two methods based on MI that are used in this work are described.…”
Section: Machine Learning Background (mentioning, confidence: 99%)
“…, M, are assumed to be equal to its mean value, (ϕ_i) = 0, µ_i^s(t) ≡ µ_s(t), 1 ≤ s ≤ 4; moreover, taking into account stochastic representations of log-skew elliptical random vectors [39], the expressions for the univariate and multivariate Shannon entropies (measured in nats) take the following forms [40], in terms of the parameters σ_22, α_3, β_3, γ_3, σ_33, α_4, β_4, γ_4, σ_44, σ_12, σ_13, σ_14, σ_23, σ_24, σ_34. As we can see from Equation 45, the mutual information, I, is calculated directly by summing the individual entropies and subtracting the joint entropy. Mutual information, I, between two random variables, X_s and X_u, compares the uncertainty of measuring the variables jointly with the uncertainty of measuring the two variables independently, identifies nonlinear dependence between two variables [41–43], and is non-negative and symmetric. A generalization of bivariate mutual information to more than two variables has been analyzed in a few different scenarios [20,21,41–43].…”
Section: Information Measures (mentioning, confidence: 99%)
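The identity the excerpt invokes, I(X;Y) = H(X) + H(Y) − H(X,Y), can be estimated directly from a joint histogram. Below is a small sketch, assuming NumPy and histogram-based probability estimates; the function names are hypothetical, and the bin count is an arbitrary illustrative choice rather than anything prescribed by the cited work.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in nats of a (possibly multidimensional) pmf."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mutual_information(x, y, bins=16):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1)  # marginal of X
    p_y = p_xy.sum(axis=0)  # marginal of Y
    return shannon_entropy(p_x) + shannon_entropy(p_y) - shannon_entropy(p_xy)

# A quadratic relationship has near-zero linear correlation but clearly
# nonzero mutual information, illustrating the nonlinear-dependence point.
rng = np.random.default_rng(1)
x = rng.normal(size=50_000)
y = x**2 + 0.1 * rng.normal(size=50_000)
print(mutual_information(x, y))
```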
“…Mutual information, I, between two random variables, X_s and X_u, compares the uncertainty of measuring the variables jointly with the uncertainty of measuring the two variables independently, identifies nonlinear dependence between two variables [41–43], and is non-negative and symmetric. A generalization of bivariate mutual information to more than two variables has been analyzed in a few different scenarios [20,21,41–43]. A direct multivariate extension of the bivariate mutual information expressed by Equation 45 to n variables X_1, X_2, …, X_n is named the multi-information [44,45], also known as total correlation, and is defined by:…”
Section: Information Measures (mentioning, confidence: 99%)
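The multi-information (total correlation) referred to here is the sum of the marginal entropies minus the joint entropy. A short sketch follows, assuming NumPy and a histogram estimator over a (samples × variables) array; the function name and bin count are illustrative assumptions only.

```python
import numpy as np

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def total_correlation(data, bins=8):
    """Multi-information (total correlation): sum of the marginal entropies
    minus the joint entropy, estimated from an n-dimensional histogram.

    data: array of shape (n_samples, n_variables).
    """
    joint, _ = np.histogramdd(data, bins=bins)
    p = joint / joint.sum()
    n_vars = data.shape[1]
    h_joint = shannon_entropy(p)
    # Marginals are obtained by summing the joint pmf over all other axes.
    h_marginals = sum(
        shannon_entropy(p.sum(axis=tuple(j for j in range(n_vars) if j != i)))
        for i in range(n_vars)
    )
    return h_marginals - h_joint

# Three variables, two of which are coupled, share information: TC > 0.
rng = np.random.default_rng(2)
x = rng.normal(size=20_000)
data = np.column_stack([x, x + 0.5 * rng.normal(size=x.shape), rng.normal(size=x.shape)])
print(total_correlation(data))
```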
“…Here, we keep our work within the information theory domain and present an analysis of causality based on Transfer Entropy [33], another measure which allows us to determine the causality between two series of data without being restricted to underlying linear dynamics. This technique, which has been applied to fields ranging from biochemistry [34] to Earth [35] and space sciences [36], allows us to capture the exchange of information between two systems and the directionality of the flux, determining the influence of one variable on the other. While autoregressive models are limited to linear relations, Transfer Entropy is a general technique which can be used to analyze any system, being equivalent to Granger causality for Gaussian variables [37].…”
(mentioning, confidence: 99%)
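As a rough illustration of the directional measure this excerpt describes, transfer entropy from Y to X with one past state on each side can be written as a combination of joint entropies, T(Y→X) = H(X_{t+1}, X_t) − H(X_t) − H(X_{t+1}, X_t, Y_t) + H(X_t, Y_t). The sketch below, assuming NumPy and a simple histogram estimator, is not the estimator used in the cited studies; the function name, bin count, and single-lag embedding are illustrative assumptions.

```python
import numpy as np

def transfer_entropy(source, target, bins=8, lag=1):
    """Histogram estimate of T(source -> target), in nats:
    T = H(X_{t+1}, X_t) - H(X_t) - H(X_{t+1}, X_t, Y_t) + H(X_t, Y_t),
    with X the target series and Y the source series (single-lag embedding)."""
    x_next, x_past, y_past = target[lag:], target[:-lag], source[:-lag]

    def joint_entropy(*cols):
        counts, _ = np.histogramdd(np.column_stack(cols), bins=bins)
        p = counts / counts.sum()
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (joint_entropy(x_next, x_past) - joint_entropy(x_past)
            - joint_entropy(x_next, x_past, y_past) + joint_entropy(x_past, y_past))

# Y drives X with a one-step delay, so information flows mainly from Y to X.
rng = np.random.default_rng(3)
y = rng.normal(size=20_000)
x = np.zeros_like(y)
x[1:] = 0.8 * y[:-1] + 0.2 * rng.normal(size=len(y) - 1)
print(transfer_entropy(y, x), transfer_entropy(x, y))  # first value should be larger
```

Unlike a linear autoregressive (Granger) test, nothing in this estimate assumes a linear coupling; the asymmetry between the two printed values is what indicates the direction of information flow.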