2017
DOI: 10.15598/aeee.v15i3.2174
Sleep Spindle Detection and Prediction Using a Mixture of Time Series and Chaotic Features

Cited by 19 publications (30 citation statements). References 0 publications.
“…TN refers to a false condition that is correctly classified as false. FN refers to a true condition that is misclassified as false, and FP to a false condition that is misclassified as true [21]. Table 4 shows that the highest counts of TP and TN, and consequently the highest accuracies, belong to the ERD mother wavelet with the DWPT-DFA.…”
Section: Discussion
confidence: 99%
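The accuracy figure discussed above follows directly from the four confusion-matrix counts. A minimal sketch, with purely illustrative counts (not the values from Table 4 of the cited paper):

```python
# Accuracy from confusion-matrix counts: the fraction of all samples
# that were classified correctly (true positives plus true negatives).
def accuracy(tp, tn, fp, fn):
    """Accuracy = (TP + TN) / (TP + TN + FP + FN)."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts for illustration only:
print(accuracy(45, 40, 8, 7))  # 0.85
```

Higher TP and TN counts raise the numerator while the total stays fixed, which is why the statement ties the highest TP/TN to the highest accuracy.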
“…Based on the DWPT-DFA method, the features extracted with different mother wavelets are fed into the SSVM classifier with the GRBF kernel. Following previous studies [8,21,22,30] on the NN, K-NN, SVM, RBF and GRBF classifiers in EEG signal processing, we employ an SSVM with a GRBF to classify the imaginary-movement features. The SSVM and GRBF are amended versions of the traditional SVM and RBF that address their limitations, such as the curse of dimensionality in the SVM, and add flexibility to the RBF for different cases of data distribution [8].…”
Section: Discussion
confidence: 99%
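The SSVM and GRBF variants mentioned above build on the standard Gaussian RBF kernel. A minimal sketch of that basic kernel, assuming the usual Gaussian form; the specific generalizations of [8] are not reproduced here, and the `gamma` value is illustrative:

```python
import math

# Gaussian RBF kernel: k(x, y) = exp(-gamma * ||x - y||^2).
# This is the building block that SVM/RBF classifiers share; the
# SSVM and GRBF amendments modify or generalize this basic form.
def rbf_kernel(x, y, gamma=0.5):
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

# Identical vectors give k = 1; similarity decays with distance.
print(rbf_kernel([1.0, 2.0], [1.0, 2.0]))  # 1.0
```

The kernel maps feature vectors into a similarity in (0, 1], letting the classifier separate data that is not linearly separable in the original feature space.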
“…The DSLVQ method computes and assigns weights to the features to indicate their importance. Three improvements enhance the generalization of the LVQ: I) weights are calculated by an iterative training algorithm for each dimension based on distances; II) a vector of weights is produced for each feature in (9); and III) the iterative training is used to optimize and select one scalar weight for each feature in (10). The selected optimal weights estimate distances in the iterative learning (t) procedure [26]:…”
Section: Discriminative Sensitive Learning Vector Quantization
confidence: 99%
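The core idea above, comparing a sample to class prototypes with a per-feature weighted distance, can be sketched as follows. The weight values, prototype vectors, and labels are illustrative assumptions, not the DSLVQ update rule of [26]:

```python
import math

# Weighted Euclidean distance: each feature dimension i contributes
# w_i * (x_i - p_i)^2, so discriminative features (large w_i) dominate
# the prototype comparison, as in DSLVQ-style weighted metrics.
def weighted_distance(x, prototype, weights):
    return math.sqrt(sum(w * (a - b) ** 2
                         for w, a, b in zip(weights, x, prototype)))

def classify(x, prototypes, labels, weights):
    """Assign x the label of its nearest prototype under the weighted metric."""
    dists = [weighted_distance(x, p, weights) for p in prototypes]
    return labels[dists.index(min(dists))]

# Hypothetical two-class example with equal feature weights:
protos = [[0.0, 0.0], [1.0, 1.0]]
print(classify([0.2, 0.1], protos, ["rest", "movement"], [1.0, 1.0]))  # rest
```

In full DSLVQ, the weights themselves are learned iteratively from the distance comparisons, so the metric adapts to emphasize the features that best separate the classes.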