ICASSP 2019 - IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2019.8682642
Squared-Loss Mutual Information via High-Dimension Coherence Matrix Estimation

Abstract: Squared-loss mutual information (SMI) is a surrogate of Shannon mutual information that is more advantageous for estimation. On the other hand, the coherence matrix of a pair of random vectors, a power-normalized version of the sample cross-covariance matrix, is a well-known second-order statistic found at the core of fundamental signal processing problems, such as canonical correlation analysis (CCA). This paper shows that SMI can be estimated from a pair of independent and identically distributed (i.i.d.) sa…
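For reference, the two objects the abstract pairs can be written in their standard forms; a minimal sketch, assuming Sugiyama's convention for SMI and the usual CCA-style power normalization (the notation below is ours, not necessarily the paper's):

$$
\mathrm{SMI}(X;Y) \;=\; \frac{1}{2}\iint p_X(x)\,p_Y(y)\left(\frac{p_{XY}(x,y)}{p_X(x)\,p_Y(y)} - 1\right)^{2} dx\,dy,
\qquad
\mathbf{C} \;=\; \mathbf{R}_{x}^{-1/2}\,\mathbf{R}_{xy}\,\mathbf{R}_{y}^{-1/2},
$$

where $\mathbf{R}_{x}$ and $\mathbf{R}_{y}$ are the auto-covariance matrices of the two random vectors and $\mathbf{R}_{xy}$ their cross-covariance matrix.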

Cited by 4 publications (9 citation statements) | References 13 publications
“…5) The proposal of a reduced-complexity approximate estimator resorting to the asymptotic behavior of Toeplitz matrices. Concerning related works, the presented approach is an extension of the main ideas briefly presented by the authors in [7]. Although interest in surrogates of entropy, KL divergence, and mutual information, such as Rényi entropy, Rényi divergence, f-divergence, and chi-squared (χ²) divergence, has a long and rich history (see [3] and references therein), their use for data analytics has been the particular focus of [4].…”
Section: A. Main Contributions, Related Work, and Overall Organization
confidence: 99%
“…KL divergence is non-negative, and it is zero if and only if $p_X(x) = q_X(x)$. A natural surrogate of KL divergence can be obtained by applying Jensen's inequality to (7). In particular, as ln(.)…”
Section: B. Chi-squared Divergence Surrogate
confidence: 99%
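The step this excerpt alludes to is presumably the standard bound that follows from the concavity of the logarithm; a minimal sketch, assuming the usual definitions of the KL and chi-squared divergences (equation (7) refers to the citing paper and is not reproduced here):

$$
D_{\mathrm{KL}}(p\,\|\,q) \;=\; \mathbb{E}_{p}\!\left[\ln\frac{p_X(x)}{q_X(x)}\right]
\;\le\; \ln \mathbb{E}_{p}\!\left[\frac{p_X(x)}{q_X(x)}\right]
\;=\; \ln\!\left(1 + \chi^{2}(p\,\|\,q)\right),
$$

since $\mathbb{E}_{p}[p_X/q_X] = \int p_X(x)^2/q_X(x)\,dx = 1 + \chi^{2}(p\,\|\,q)$. The chi-squared divergence thus upper-bounds the KL divergence through the increasing map $\ln(1+\cdot)$, which is what makes it a natural surrogate.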
“…This line of research finds numerous applications in data science and machine learning. Recently, the problem of estimating information has been linked in [2] with the aforementioned problem of coherence estimation by mapping the bivariate data onto a high-dimensional feature space based on the empirical characteristic function. In particular, the Frobenius norm of the coherence matrix computed after this high-dimensional mapping converges with M to the so-called squared-loss mutual information [14].…”
Section: Introduction
confidence: 99%
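To make the idea in this excerpt concrete, a minimal numerical sketch in Python: it maps scalar samples to empirical-characteristic-function features on a frequency grid, forms the power-normalized cross-covariance (coherence) matrix, and takes half its squared Frobenius norm as an SMI proxy. The feature map, frequency grid, diagonal loading, and scaling convention are all illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)

# Illustrative bivariate data: M i.i.d. pairs of correlated Gaussians.
M, rho = 5000, 0.8
x = rng.standard_normal(M)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(M)

# Hypothetical feature map: empirical characteristic function
# evaluated on a grid of K frequencies (an assumed, illustrative choice).
omega = np.linspace(-3.0, 3.0, 15)
Phi_x = np.exp(1j * np.outer(x, omega))   # M x K complex features
Phi_y = np.exp(1j * np.outer(y, omega))

# Center the features (removes the trivial DC component) and form
# the sample auto- and cross-covariance matrices.
Phi_x -= Phi_x.mean(axis=0)
Phi_y -= Phi_y.mean(axis=0)
Rxx = (Phi_x.conj().T @ Phi_x) / M
Ryy = (Phi_y.conj().T @ Phi_y) / M
Rxy = (Phi_x.conj().T @ Phi_y) / M

# Coherence matrix: power-normalized cross-covariance,
# C = Rxx^{-1/2} Rxy Ryy^{-1/2}, with small diagonal loading for stability.
eps = 1e-6 * np.eye(len(omega))
Wx = np.linalg.inv(sqrtm(Rxx + eps))
Wy = np.linalg.inv(sqrtm(Ryy + eps))
C = Wx @ Rxy @ Wy

# Half the squared Frobenius norm of C as an SMI proxy
# (the factor 1/2 matches the SMI definition; scaling is assumed).
smi_hat = 0.5 * np.linalg.norm(C, 'fro')**2
print(f"SMI estimate: {smi_hat:.3f}")
```

With a finite grid this only captures the dependence visible through the chosen features, so the estimate is a lower-complexity approximation; enlarging the grid trades bias for variance in the usual way.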