2017
DOI: 10.1080/00207721.2017.1340986
High-resolution time–frequency representation of EEG data using multi-scale wavelets

Cited by 23 publications (14 citation statements)
References 41 publications
“…The absence of this crucial information might lead to a model structure which cannot sufficiently represent the inherent dynamics of the data (and therefore the associated system), especially when the system is not persistently excited. It is known that most physical systems behave mainly as low-pass filters and are actually defined on a subspace of $L^2[0,T]$, namely the Sobolev space $H^m[0,T] = \{ f \in L^2[0,T] \mid f^{(k)} \in L^2[0,T],\ k = 1, 2, \cdots, m \}$, where the weak derivatives up to the $m$-th order are also square-integrable [17]. Thus, a stricter metric, which can reveal the entire useful information of observations realized in the Sobolev space, is used in this study.…”
Section: B. The UROLS Algorithm for TVARX Model Identification in TF-CGC
confidence: 99%
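The "stricter metric" referred to in the statement is, by the standard definition, the Sobolev norm on $H^m[0,T]$, which adds the $L^2$ norms of the weak derivatives to the plain $L^2$ norm. The symbols $f$, $m$, and $T$ below follow the standard convention, since the scraped text dropped the original notation:

```latex
\|f\|_{H^m[0,T]}^{2}
  = \sum_{k=0}^{m} \bigl\| f^{(k)} \bigr\|_{L^2[0,T]}^{2}
  = \sum_{k=0}^{m} \int_{0}^{T} \bigl| f^{(k)}(t) \bigr|^{2}\, dt .
```

Measuring approximation error in this norm penalizes mismatch in the derivatives as well as in the signal itself, which is why it reveals more of the useful information in the observations than the $L^2$ norm alone.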
“…3) Select the significant term with the largest ERR value as the first term and remove it from the candidate dictionary; repeat the process and choose the $k$-th term by orthogonalizing all remaining expanded terms against the $k-1$ previously selected terms and calculating the associated ERR value; the term with the largest ERR value is selected. 4) Determine the number of model terms using the APRESS statistic given in (17). 5) Approximate the coefficients of the selected model terms, and estimate the initial time-varying parameters using formula (5); the essential TVARX models for TF-CGC decomposition can then be established.…”
Section: The Formulation of TF-CGC Analysis
confidence: 99%
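The greedy forward-selection loop described in steps 3)–4) can be sketched as classical orthogonal forward regression with an error reduction ratio (ERR) criterion. This is a minimal stand-in, not the paper's exact procedure: the fixed term count replaces the APRESS-based stopping rule, and all names are illustrative.

```python
import numpy as np

def ofr_select(D, y, n_terms):
    """Orthogonal forward regression: greedily pick columns of the
    candidate dictionary D that explain the most variance of y.
    Returns the indices of the selected terms in selection order."""
    D = D.astype(float)
    y = y.astype(float)
    selected = []
    Q = []  # orthogonalized versions of the already-selected columns
    for _ in range(n_terms):
        best_idx, best_err, best_q = -1, -1.0, None
        for j in range(D.shape[1]):
            if j in selected:
                continue
            # orthogonalize candidate j against the selected terms
            q = D[:, j].copy()
            for qk in Q:
                q -= (qk @ D[:, j]) / (qk @ qk) * qk
            denom = q @ q
            if denom < 1e-12:  # candidate is (near) linearly dependent
                continue
            # error reduction ratio of this candidate
            err = (q @ y) ** 2 / (denom * (y @ y))
            if err > best_err:
                best_idx, best_err, best_q = j, err, q
        selected.append(best_idx)
        Q.append(best_q)
    return selected
```

Because each candidate is orthogonalized against the terms already chosen, its ERR measures only the *additional* variance it explains, which is what makes the greedy loop effective at producing sparse models.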
“…Many approaches have been proposed to identify TV-NARX models, which can be broadly classified into three categories: the multi-model approach [6], adaptive estimation algorithms [7], and basis function expansion methods [8,9]. In the first strategy, a global system model is divided into a set of local models by a time-shifting window; each local model can then be treated as a stationary process and identified by a time-invariant modeling approach [10].…”
Section: Introduction
confidence: 99%
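The sliding-window (multi-model) strategy in the first category can be sketched as follows: each window is treated as locally stationary and fitted with an ordinary least-squares AR model. The window length, step, and model order here are illustrative choices, not values from the cited works.

```python
import numpy as np

def sliding_window_ar(y, order, win, step):
    """Fit a time-invariant AR(order) model inside each sliding window,
    approximating a time-varying process by a sequence of local models.
    Returns one coefficient vector per window position."""
    coeffs = []
    for start in range(0, len(y) - win + 1, step):
        seg = y[start:start + win]
        # local regression: seg[t] ~ sum_k a_k * seg[t - k]
        X = np.column_stack([seg[order - k:win - k]
                             for k in range(1, order + 1)])
        target = seg[order:]
        a, *_ = np.linalg.lstsq(X, target, rcond=None)
        coeffs.append(a)
    return np.array(coeffs)
```

The trade-off the quoted passage alludes to is visible here: a short window tracks fast parameter changes but yields noisy local estimates, while a long window gives stable estimates but smears rapid nonstationarity.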
“…The time-varying auto-regressive moving average (TV-ARMA) model with basis function expansion has been developed for time-varying system identification and the associated TFSE [22–26]. A time-domain model is first identified, and the time-frequency spectrum is then estimated indirectly by transforming the time-domain model, which alleviates the time-frequency dilemma. Basis function expansion time-varying (nonlinear) autoregressive with exogenous input (TV-(N)ARX) models combined with orthogonal forward regression algorithms have proved powerful in describing complex nonstationary processes [27–29]. Recently, Guo et al. showed that asymmetric-basis-function TV-NARX models inspired by neuronal dynamics can significantly improve the model's ability to track both smooth trends and abrupt changes, and improve model sparsity [30].…”
Section: Introduction
confidence: 99%
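The basis function expansion idea behind these TV-(N)ARX models is to project each time-varying coefficient $a_k(t)$ onto a small set of fixed basis functions, turning the nonstationary identification problem into an ordinary time-invariant least-squares regression in the expanded coefficients. A minimal sketch using a Legendre-polynomial basis (the basis family, basis count, and AR order are illustrative assumptions, not the cited papers' choices):

```python
import numpy as np

def tvar_basis_fit(y, order, n_basis):
    """Identify a time-varying AR model y[t] = sum_k a_k(t) y[t-k] + e[t]
    by expanding each a_k(t) over Legendre polynomials in normalized time.
    Returns a_hat with shape (len(y), order): the coefficient trajectories."""
    n = len(y)
    tau = np.linspace(-1, 1, n)                             # normalized time
    B = np.polynomial.legendre.legvander(tau, n_basis - 1)  # (n, n_basis)
    # expanded regressors: y[t-k] * basis_j(t) -> time-invariant unknowns
    rows = []
    for t in range(order, n):
        rows.append(np.concatenate([y[t - k] * B[t]
                                    for k in range(1, order + 1)]))
    X = np.array(rows)
    theta, *_ = np.linalg.lstsq(X, y[order:], rcond=None)
    theta = theta.reshape(order, n_basis)
    return B @ theta.T  # (n, order): a_k(t) evaluated at every sample
```

Once the expansion coefficients are estimated, evaluating $a_k(t)$ on a dense time grid gives the trajectories from which a time-frequency spectrum can be computed, which is the indirect TFSE route the passage describes.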