2023
DOI: 10.1007/s11071-023-08298-w

Novel techniques for improving NNetEn entropy calculation for short and noisy time series

Cited by 12 publications (16 citation statements)
References 52 publications
“…Interested readers are referred to the work of Heidari et al. (2023), where a detailed description of the constant offset, B, is given. The offset value is estimated by averaging all the AE index time series investigated ($B = 370$). Then a signal preprocessing is applied to the AE index using Equation ….”
Section: Data Acquisition and Methods of Analysis
confidence: 99%
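To make the offset step concrete, here is a minimal sketch in Python of estimating a constant offset as the grand mean of several AE index series and removing it before further analysis. The function names are hypothetical, and since the excerpt elides the actual preprocessing equation, plain offset subtraction is assumed purely for illustration:

```python
import numpy as np

def estimate_offset(series_list):
    """Estimate a constant offset B as the grand mean of all
    investigated AE index time series (the cited work reports
    B = 370 for its data set)."""
    return float(np.mean(np.concatenate(series_list)))

def preprocess(ae_series, offset):
    """Remove the constant offset before entropy analysis.
    The excerpt does not reproduce the exact preprocessing
    equation; simple subtraction is assumed here."""
    return np.asarray(ae_series, dtype=float) - offset

# Synthetic usage example (hypothetical data)
rng = np.random.default_rng(0)
series = [370 + 50 * rng.standard_normal(1000) for _ in range(5)]
B = estimate_offset(series)   # close to 370 for this synthetic set
x0 = preprocess(series[0], B)
```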
“…The NNetEn algorithm estimates the entropy in the following stages: stage 1 involves loading the time series $X = (x_1, x_2, x_3, \dots, x_N)$ into the reservoir. In the reservoir, the input vector ($Y$) is formed from the time series ($X$) by filling the reservoir matrix row-wise with time series stretching (method 3; Heidari et al., 2023). The maximum length of the time series ($N_{\max}$) is determined by the number of elements in the reservoir ($N_{\max} = Y_{\max} \times P_{\max} = 19{,}625$).…”
Section: Data Acquisition and Methods of Analysis
confidence: 99%
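As a rough illustration of stage 1 (row-wise reservoir filling with time series stretching, method 3), here is a minimal sketch in Python. The 25 × 785 reservoir shape is inferred from $N_{\max} = Y_{\max} \times P_{\max} = 19{,}625$, and interpolation-based stretching is an assumption; the reference implementation may stretch by duplicating elements instead:

```python
import numpy as np

def fill_reservoir_method3(x, n_rows=25, n_cols=785):
    """Fill the reservoir matrix row-wise after stretching the
    time series to N_max = n_rows * n_cols = 19,625 samples.
    Linear interpolation is used for stretching here; this is an
    assumption, not necessarily the cited method's exact rule."""
    x = np.asarray(x, dtype=float)
    n_max = n_rows * n_cols
    # Stretch the series to exactly n_max samples.
    positions = np.linspace(0.0, len(x) - 1.0, n_max)
    stretched = np.interp(positions, np.arange(len(x)), x)
    # Row-wise filling: consecutive samples fill each row in turn.
    return stretched.reshape(n_rows, n_cols)
```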
“…The PRE summary statistic is based on Shannon's entropy.24,25,32–39 It is defined as follows:
$$\mathrm{PRE} = -\sum_{i=1}^{n} \mathrm{p}(x_i)\,\log_2 \mathrm{p}(x_i),$$
where $\mathrm{p}(x_i)$ is the “probability” of observing a signal (intensity value) in a spectrum.…”
Section: Theory
confidence: 99%
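For concreteness, here is a minimal sketch in Python of computing a PRE-style statistic as the Shannon entropy of a spectrum's intensity distribution. The histogram-based estimate of $\mathrm{p}(x_i)$ and the function name are assumptions, not the cited work's exact procedure:

```python
import numpy as np

def pre_statistic(intensities, n_bins=100):
    """PRE = -sum_i p(x_i) * log2 p(x_i), with p(x_i) estimated
    from a histogram of the spectrum's intensity values (the
    binning scheme is an assumption for illustration)."""
    intensities = np.asarray(intensities, dtype=float)
    counts, _ = np.histogram(intensities, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]  # treat 0 * log2(0) as 0
    return float(-np.sum(p * np.log2(p)))
```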