2015
DOI: 10.1109/tit.2014.2370058
Measures of Entropy From Data Using Infinitely Divisible Kernels

Cited by 87 publications (57 citation statements: 3 supporting, 54 mentioning, 0 contrasting).
References 17 publications.
“…However, like all other information-theoretic quantities, TE is defined in terms of the probability distributions of the system under study, which in practice must be estimated from data. Probability estimation is a challenging task, and it can significantly affect the outcome of information-theoretic analyses, including the computation of TE (Giraldo et al., 2015; Cekic et al., 2018; Timme and Lapish, 2018). Current methods that successfully estimate TE are based on a local approximation of the probability distributions from nearest-neighbor distances (Kraskov et al., 2004; Lindner et al., 2011), or on symbolization schemes that allow the probabilities to be estimated from the symbols' relative frequencies (Dimitriadis et al., 2016).…”
Section: Introduction (mentioning)
confidence: 99%
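
As an illustration of the nearest-neighbor route this quote describes, below is a minimal sketch of the Kozachenko-Leonenko k-NN entropy estimator, the building block behind Kraskov-style estimators. The function name, the default k, and the SciPy-based implementation are illustrative assumptions, not taken from the cited papers.

```python
# A minimal sketch of the Kozachenko-Leonenko k-nearest-neighbor entropy
# estimator (the building block behind Kraskov-style TE estimators).
# Hypothetical helper; names and defaults are illustrative.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(x, k=3):
    """Differential entropy (nats) of samples x with shape (n, d).

    Assumes samples are in general position (no duplicate points).
    """
    n, d = x.shape
    tree = cKDTree(x)
    # distance to the k-th neighbor of each point, excluding the point itself
    eps = tree.query(x, k=k + 1)[0][:, k]
    # log-volume of the d-dimensional Euclidean unit ball
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
    # H_hat = psi(n) - psi(k) + log(V_d) + (d/n) * sum_i log(2 * eps_i)
    return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(2 * eps))
```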
“…Current methods that successfully estimate TE are based on a local approximation of the probability distributions from nearest-neighbor distances (Kraskov et al., 2004; Lindner et al., 2011), or on symbolization schemes that allow the probabilities to be estimated from the symbols' relative frequencies (Dimitriadis et al., 2016). Nonetheless, it is desirable to obtain TE directly from data, without the intermediate step of probability estimation, as has been achieved for other information-theoretic quantities (Giraldo et al., 2015).…”
Section: Introduction (mentioning)
confidence: 99%
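
The symbolization route mentioned in both quotes can be sketched just as compactly. The ordinal-pattern (permutation) scheme below is one common choice; it is an illustrative assumption, not necessarily the specific scheme of Dimitriadis et al. (2016).

```python
# A minimal sketch of a symbolization scheme: map short windows of a time
# series to ordinal patterns, then plug the symbols' relative frequencies
# into the Shannon entropy. Names and the window length are illustrative.
import numpy as np
from collections import Counter

def permutation_entropy(x, order=3):
    """Shannon entropy (nats) of ordinal patterns of length `order`."""
    n = len(x) - order + 1
    # each window is replaced by the permutation that sorts it (its symbol)
    symbols = [tuple(np.argsort(x[i:i + order])) for i in range(n)]
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()  # relative frequencies as probability estimates
    return -np.sum(p * np.log(p))
```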
“…where $A(C, P) = \frac{\langle C, P \rangle_F}{\|C\|_F \, \|P\|_F}$ is exactly the kernel alignment cost function [12, 54]. Note that the distance in Equation 6 can also be implemented with more advanced differentiable measures of (dis)similarity between positive-definite matrices, such as divergence and mutual information [14, 32]. However, these options are not explored in this paper and are left for future research.…”
Section: Deep Kernelized Autoencoders (mentioning)
confidence: 99%
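
For reference, the alignment $A(C, P)$ reconstructed above is the Frobenius inner product of two Gram matrices, normalized by their Frobenius norms; a minimal NumPy sketch (the function name is illustrative):

```python
# A minimal sketch of kernel alignment between two Gram matrices:
# A(C, P) = <C, P>_F / (||C||_F * ||P||_F). Name is illustrative.
import numpy as np

def kernel_alignment(C, P):
    """Alignment in [0, 1] for positive-semidefinite Gram matrices C, P."""
    inner = np.sum(C * P)  # Frobenius inner product <C, P>_F
    return inner / (np.linalg.norm(C, 'fro') * np.linalg.norm(P, 'fro'))
```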
“…is a test statistic that is proportional to Rényi's quadratic entropy [13] under the model defined by P . Thus, for succinctness, we refer to Eq.…”
Section: Kernel-based Quadratic Entropy (mentioning)
confidence: 99%
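
The quadratic entropy in this quote can be estimated directly from a Gram matrix via the matrix-based formulation of the indexed paper: with $A$ a kernel matrix normalized to unit trace, $S_2(A) = \frac{1}{1-2}\log_2 \operatorname{tr}(A^2) = -\log_2 \operatorname{tr}(A^2)$. A minimal sketch follows; the Gaussian kernel and its width `sigma` are illustrative assumptions:

```python
# A minimal sketch of the matrix-based Renyi quadratic entropy
# S_2(A) = -log2(tr(A^2)), with A the Gram matrix normalized to unit trace.
# The Gaussian kernel and its width `sigma` are illustrative assumptions.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def quadratic_matrix_entropy(x, sigma=1.0):
    """Renyi quadratic entropy (bits) estimated from samples x, shape (n, d)."""
    d2 = squareform(pdist(x, 'sqeuclidean'))
    K = np.exp(-d2 / (2 * sigma ** 2))  # Gaussian Gram matrix
    A = K / np.trace(K)                 # normalize so tr(A) = 1
    return -np.log2(np.trace(A @ A))    # S_2(A) = -log2 tr(A^2)
```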