2016 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2016.7727621

Study of Learning Entropy for onset detection of epileptic seizures in EEG time series

Cited by 6 publications (6 citation statements) | References: 15 publications
“…Later on, cumulative sums were proposed for practical approximations of LE, i.e.,

$$\mathrm{LE}(k)=\frac{1}{n_\alpha \cdot n_w}\sum_{\alpha}\sum_{i=1}^{n_w}\Gamma\!\left(|\Delta w_i| > \alpha\cdot\overline{|\Delta w_i|}\right),$$

where

$$\Gamma=\begin{cases}1, & \text{if } |\Delta w_i| > \alpha\cdot\overline{|\Delta w_i|}\\[2pt] 0, & \text{otherwise}\end{cases}$$

is an indicator function, $\overline{|\Delta w_i|}$ is the mean absolute value of the recent history of the $i$-th weight adaptation increment (for details, see Section 3), and $\alpha$ is the multiscale sensitivity parameter, $\alpha \in \{\alpha_1, \alpha_2, \ldots, \alpha_{n_\alpha}\}$, $\alpha_1 < \alpha_2 < \cdots < \alpha_{n_\alpha}$. The figure recalls the intriguing performance of LE, which showed significant robustness against false detections and much lower computational complexity than sample entropy.…”
Section: Learning Entropy (mentioning)
confidence: 72%
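The quoted cumulative-sum approximation maps directly onto a few lines of code. The following Python snippet is a minimal sketch, not the cited authors' implementation: the function name `learning_entropy`, the (n_samples, n_w) layout of the weight-increment array, and the example α values and history window are illustrative assumptions.

```python
import numpy as np

def learning_entropy(dw, alphas, window):
    """Minimal sketch of the cumulative-sum LE approximation.

    dw     : (n_samples, n_w) array of weight adaptation increments dw_i(k)
    alphas : ascending sensitivity parameters alpha_1 < ... < alpha_{n_alpha}
    window : length of the recent history used for mean(|dw_i|)
    """
    dw = np.abs(np.asarray(dw, dtype=float))
    n_samples, n_w = dw.shape
    le = np.zeros(n_samples)
    for k in range(window, n_samples):
        # mean absolute value of the recent history of each weight increment
        mean_abs = dw[k - window:k].mean(axis=0)
        # Gamma: count (alpha, i) pairs where |dw_i(k)| exceeds alpha * mean
        hits = sum(np.count_nonzero(dw[k] > a * mean_abs) for a in alphas)
        le[k] = hits / (len(alphas) * n_w)
    return le

# Toy usage on synthetic increments: an abrupt burst of adaptation at k = 700
rng = np.random.default_rng(0)
dw = rng.normal(scale=0.01, size=(1000, 4))
dw[700] += 0.2  # sudden learning effort, e.g. a novelty such as a seizure onset
le = learning_entropy(dw, alphas=[2, 4, 8, 16], window=100)
print(le[700])  # should be near 1, while le stays near 0 elsewhere
```

LE(k) is high only when many weights exceed many of the α-scaled thresholds at once, which is what the quoted statement credits for the robustness against isolated false detections.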
“…Comparison of Learning Entropy (LE) vs Sample Entropy (SE) for novelty detection in an electroencephalogram (EEG) signal with an epileptic seizure; LE performs more robust detection of the seizure onset at a lower computational complexity (adopted and modified from the work of Bukovsky et al).…”
Section: Learning Entropy (mentioning)
confidence: 99%
“…Novelty detection via LE is based on real-time learning of systems after they have been pretrained on an initial data set. Aside from the founding work [34], other examples of works that indicate the usefulness of LE for biomedical or technical data and for novelty detection in data with concept drift can be found in [36]-[39].…”
Section: Introduction (mentioning)
confidence: 99%
“…The Learning Entropy reflects the effort of the adaptive system to learn novelty in the data. In later works, the authors extend the original algorithm with prediction error evaluation [5] or with a different technique of Learning Entropy estimation [6], and show that evaluation of weight increments can be a useful approach to novelty detection [7], [8].…”
Section: Introduction (mentioning)
confidence: 99%