2021
DOI: 10.3390/e23060641
Neural Estimator of Information for Time-Series Data with Dependency

Abstract: Novel approaches to estimating information measures using neural networks have been widely celebrated in recent years in both the information theory and machine learning communities. These neural-based estimators have been shown to converge to the true values when estimating mutual information and conditional mutual information from independent samples. However, if the samples in the dataset are not independent, the consistency of these estimators requires further investigation. This is of particular interest for a more comp…

Cited by 9 publications (5 citation statements) · References 37 publications
“…The latter, in particular, showed that MINE is minimax optimal under appropriate regularity assumptions on the distributions (see also [31] for formal limitations on MINE performance). For data with memory, [32] leveraged MINE for transfer entropy, while [33] constructed a conditional MI estimator and extended it to DI between 1st order Markov processes.…”
Section: Estimation and Optimization of Directed Information
confidence: 99%
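As context for the MINE-based estimators referenced in the statement above, the following is a minimal sketch of the Donsker-Varadhan lower bound that MINE maximizes, I(X;Y) ≥ E_joint[T] − log E_product[e^T]. For brevity it uses an analytically chosen critic for a correlated Gaussian pair (where the optimal critic is known in closed form) instead of a trained neural network, and i.i.d. samples rather than time-series data; in MINE proper, T is a neural network trained by gradient ascent on this bound.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated Gaussian pair; closed-form MI = -0.5 * log(1 - rho^2).
rho = 0.6
n = 100_000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
true_mi = -0.5 * np.log(1 - rho**2)

def critic(x, y):
    # Optimal critic for this Gaussian pair (it equals the log density ratio
    # up to an additive constant, to which the DV bound is invariant);
    # in MINE this role is played by a trained neural network.
    return (rho * x * y - 0.5 * rho**2 * (x**2 + y**2)) / (1 - rho**2)

# Donsker-Varadhan bound: E_joint[T] - log E_product[exp(T)].
y_shuffled = rng.permutation(y)  # shuffling approximates the product of marginals
dv = critic(x, y).mean() - np.log(np.exp(critic(x, y_shuffled)).mean())

print(f"true MI = {true_mi:.3f}, DV estimate = {dv:.3f}")
```

With this critic family the empirical bound is tight up to sampling error; with a suboptimal critic it remains a valid lower bound, which is what makes the objective safe to maximize over a neural network class.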
“…The mutual information NE (MINE) was proposed in [BBR + 18], and has seen various improvements since [POvdO + 18, SE19, CABH + 19, MMD + 21]. Extensions of the neural estimation approach to directed information were studied in [MGBS21,TAGP23b,TAGP23a]. Theoretical guarantees for f -divergence NEs, accounting for approximation and estimation errors, as we do here, were developed in [SSG21,SG22] (see also [NWJ10] for a related approach based on reproducing kernel Hilbert space parameterization).…”
Section: Introduction
confidence: 99%
“…The simplicity of the training method in reservoir networks attracts researchers from related scientific fields. Most of these studies are related to traditional applications of machine learning methods: pattern recognition [7], system approximation [8], adaptive data filtering [9]. To improve the classification accuracy and enhance the approximation capabilities of reservoir networks, some studies employ a combination of multiple reservoirs, which increases the computational resource requirements.…”
Section: Introduction
confidence: 99%