2010
DOI: 10.4310/sii.2010.v3.n3.a9

Computation- and space-efficient implementation of SSA

Abstract: The computational complexity of the different steps of Basic SSA is discussed. It is shown that the use of general-purpose "black-box" routines, such as those found in packages like LAPACK, leads to a huge waste of time, since the Hankel structure of the trajectory matrix is not taken into account. We outline several state-of-the-art algorithms, including the Lanczos-based truncated Singular Value Decomposition (SVD), which can be modified to exploit the structure of the trajectory matrix. The key components here …
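
As a rough illustration of the approach the abstract describes, here is a minimal Python sketch (my own code, not the paper's implementation; the function name hankel_svd and the use of SciPy's LinearOperator/svds are assumptions): a Lanczos-type truncated SVD is fed an implicit Hankel operator whose matrix-vector products are computed with FFT-based convolutions, so the L x K trajectory matrix is never formed explicitly.

```python
# Hedged sketch (not the paper's implementation): truncated SVD of the SSA
# trajectory matrix without ever forming it. A Lanczos-type solver (SciPy's
# ARPACK-backed svds) sees only an implicit Hankel operator whose products
# are evaluated through FFT-based convolutions.
import numpy as np
from scipy.signal import fftconvolve
from scipy.sparse.linalg import LinearOperator, svds

def hankel_svd(series, L, k):
    """Leading k singular triples of the L x K trajectory matrix X[i, j] = series[i + j]."""
    series = np.asarray(series, dtype=float)
    K = len(series) - L + 1

    def matvec(v):                       # X @ v as a linear convolution
        v = np.ravel(v)
        return fftconvolve(series, v[::-1])[K - 1:K - 1 + L]

    def rmatvec(u):                      # X.T @ u, same trick on the other side
        u = np.ravel(u)
        return fftconvolve(series, u[::-1])[L - 1:L - 1 + K]

    X = LinearOperator((L, K), matvec=matvec, rmatvec=rmatvec, dtype=float)
    return svds(X, k=k)                  # u, s, vt of the k leading components

# Example: leading components of a noisy periodic series
t = np.arange(1000)
noise = 0.1 * np.random.default_rng(1).standard_normal(1000)
u, s, vt = hankel_svd(np.sin(2 * np.pi * t / 37) + noise, L=400, k=5)
```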

Cited by 81 publications (75 citation statements, published 2013–2022); references 31 publications.
“…This is because the unchanged grouping-transformation part dominates the computational cost in this stage, since p ≪ K. According to the introductory computational-complexity analysis of the SSA algorithm in [35], the step-wise complexity of the presented techniques, in terms of multiply-accumulate operations (MACs), is given in Table VIII for comparison. The embedding stage only consists of relocating the elements from a vector array into a matrix, so no MACs are involved.…”
Section: E. Computational Complexity for SSA and F-SSA
confidence: 99%
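
To make the remark about the embedding stage concrete, the following sketch (my own illustration, not code from [35]) shows that building the trajectory matrix is pure element relocation: NumPy can express it as a strided view that performs no arithmetic on the data at all.

```python
# Illustration only (not code from [35]): the embedding step is a re-indexing
# of the series, so it needs zero multiply-accumulate operations. NumPy can
# even express the L x K trajectory matrix as a zero-copy strided view.
import numpy as np

def trajectory_matrix(series, L):
    series = np.ascontiguousarray(series, dtype=float)
    K = len(series) - L + 1
    s = series.strides[0]
    # element (i, j) is series[i + j]; only pointers move, no arithmetic on data
    return np.lib.stride_tricks.as_strided(series, shape=(L, K), strides=(s, s))

print(trajectory_matrix(np.arange(6.0), 3))
# [[0. 1. 2. 3.]
#  [1. 2. 3. 4.]
#  [2. 3. 4. 5.]]
```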
“…The Hankel structure of H implies that applying this matrix to a vector is equivalent to computing the convolution of the data series with this vector. Thanks to the properties of the fast Fourier transform (FFT), fast Hankel matrix product algorithms can be designed that perform this operation much more rapidly and with a much smaller memory footprint than direct multiplication (13, 29). This approach presents a processing cost proportional to O(L log(L)) rather than O(MN).…”
Section: Methods
confidence: 99%
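
The quoted claim is easy to verify numerically. The sketch below (illustrative code, not taken from references (13) or (29)) checks that the FFT-based Hankel product matches direct multiplication by the explicitly formed trajectory matrix.

```python
# Illustrative check (not code from (13) or (29)): the FFT-based Hankel product
# agrees with direct multiplication by the explicit trajectory matrix, while
# costing O(N log N) instead of O(L*K) multiply-accumulates.
import numpy as np
from scipy.linalg import hankel
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
series = rng.standard_normal(1000)
L = 400
K = len(series) - L + 1
v = rng.standard_normal(K)

H = hankel(series[:L], series[L - 1:])                 # explicit L x K Hankel matrix
direct = H @ v                                          # O(L*K) direct product
fast = fftconvolve(series, v[::-1])[K - 1:K - 1 + L]    # FFT-based product
print(np.allclose(direct, fast))                        # True
```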
“…Fig. 2 [19]), while the other two models were performed using the forecast package [20]. For each approach, the following performance indexes related to the forecasting errors are reported: MAE, MSE, RMSE and MAPE, together with residuals test statistics.…”
Section: Application To Real Data
confidence: 99%
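
For reference, the quoted performance indexes follow their standard definitions; the small sketch below (my own, not taken from [19] or [20]) computes MAE, MSE, RMSE and MAPE for a pair of observed/forecast series.

```python
# Standard definitions of the quoted error measures (illustrative sketch only;
# not code from the cited works). MAPE assumes the observed values are nonzero.
import numpy as np

def forecast_errors(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    e = y_true - y_pred
    mae = np.mean(np.abs(e))                      # mean absolute error
    mse = np.mean(e ** 2)                         # mean squared error
    rmse = np.sqrt(mse)                           # root mean squared error
    mape = 100.0 * np.mean(np.abs(e / y_true))    # mean absolute percentage error
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "MAPE": mape}
```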