2005 ICSC Congress on Computational Intelligence Methods and Applications
DOI: 10.1109/cima.2005.1662352

Minimum Message Length Moving Average Time Series Data Mining

Cited by 6 publications (10 citation statements)
References 27 publications
“…In MML, there is (Bayesian) prior knowledge (or a prior distribution), π, over the parameter space. Following Wallace and Freeman [9], MML has been shown to work well in time series models, such as autoregressive (AR) and moving average (MA) models [18,27,28]. We can thus estimate the parameters [7,9] by minimizing the message length:…”
Section: Minimum Message Length
confidence: 99%
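The quoted statement breaks off at the message-length expression. For reference (this is the standard Wallace–Freeman 1987 form, not necessarily the exact notation of the citing papers), the MML87 message length for a model with k continuous parameters θ, prior π, likelihood f(x | θ), and expected Fisher information matrix F(θ) is:

```latex
I(\theta, x) = -\log \pi(\theta) + \tfrac{1}{2}\log\lvert F(\theta)\rvert
             - \log f(x \mid \theta) + \tfrac{k}{2}\bigl(1 + \log \kappa_k\bigr)
```

where κ_k is the k-dimensional lattice quantisation constant; MML estimation selects the θ that minimises I(θ, x).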
“…In this paper, we select the best ARMA(p, q) model and then train the LSTM model for the residuals through the ARMA model. The time-step order used in LSTM is the parameter q in ARMA(p, q) determined by different information-theoretic criteria [18,19].…”
Section: Introduction
confidence: 99%
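The statement above describes picking a model order with an information-theoretic criterion. A minimal illustrative sketch, restricted to a pure AR(p) model for simplicity (a full ARMA(p, q) fit would need an iterative likelihood optimiser); the simulated series, the `fit_ar` helper, and the Gaussian-AIC formula used here are illustrative assumptions, not taken from the cited papers:

```python
# Hypothetical sketch: selecting an AR(p) order by Gaussian AIC.
# Pure Python: least-squares fit via the normal equations.
import math
import random

def fit_ar(y, p):
    """Fit AR(p) by ordinary least squares; return (coefficients, RSS)."""
    n = len(y)
    # Design rows [y[t-1], ..., y[t-p]] predicting y[t]
    X = [[y[t - j] for j in range(1, p + 1)] for t in range(p, n)]
    target = [y[t] for t in range(p, n)]
    # Normal equations (X^T X) b = X^T target
    A = [[sum(row[r] * row[c] for row in X) for c in range(p)] for r in range(p)]
    b = [sum(X[i][r] * target[i] for i in range(len(X))) for r in range(p)]
    # Gaussian elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * p
    for r in range(p - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, p))) / A[r][r]
    rss = sum((target[i] - sum(X[i][c] * coef[c] for c in range(p))) ** 2
              for i in range(len(X)))
    return coef, rss

def aic(rss, n_obs, k_params):
    """Gaussian AIC up to an additive constant: n*ln(RSS/n) + 2k."""
    return n_obs * math.log(rss / n_obs) + 2 * k_params

random.seed(0)
# Simulate an AR(2) process: y[t] = 0.6*y[t-1] - 0.3*y[t-2] + noise
y = [0.0, 0.0]
for _ in range(500):
    y.append(0.6 * y[-1] - 0.3 * y[-2] + random.gauss(0, 1))

scores = {}
for p in range(1, 6):
    _, rss = fit_ar(y, p)
    scores[p] = aic(rss, len(y) - p, p + 1)  # +1 parameter for the noise variance
best_p = min(scores, key=scores.get)
print("AIC-selected order:", best_p)
```

Swapping the `aic` scoring function for BIC (penalty `k*ln(n)` instead of `2k`) or an MML message length changes only the last loop; the fit itself is unchanged.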
“…In MML, there is prior knowledge (a prior distribution), π, over the parameter space, so MML is part of the Bayesian approach. Following Wallace and Freeman [15], MML87 is an extended version of MML; it has been shown to work well in time series models such as the autoregressive (AR) and moving average (MA) models [12,16]. Assuming the ARMA model parameters are given by β = (φ_1, ..., φ_p, θ_1, ..., θ_q, σ^2), following MML87 we seek the parameter values that minimize the message length:…”
Section: Minimum Message Length
confidence: 99%
“…We use the information-theoretic Minimum Message Length (MML), AIC, and BIC to select the model orders of ARMA(p, q), and we then train the LSTM model on the residuals left from the ARMA model [12,13]. Most other papers using this kind of hybrid model use information-theoretic model selection techniques such as AIC and BIC.…”
Section: Introduction
confidence: 99%
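The hybrid scheme the statement above describes — fit a linear time-series model, then model its residuals — can be sketched in miniature. This is a hypothetical illustration: the AR(1) fit is closed-form least squares, and the LSTM stage is replaced by a trivial mean-of-recent-residuals stand-in, since the point here is only the residual hand-off, not the network:

```python
# Hypothetical sketch of a linear-model + residual-model hybrid forecast.
import random

random.seed(1)
# Synthetic series: AR(1) signal with a constant drift term
y = [0.0]
for _ in range(300):
    y.append(0.5 * y[-1] + 0.8 + random.gauss(0, 0.3))

# Stage 1: closed-form least-squares AR(1) fit (no intercept, so some
# structure is deliberately left behind in the residuals)
num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
phi = num / den

residuals = [y[t] - phi * y[t - 1] for t in range(1, len(y))]

# Stage 2 (placeholder for the LSTM): forecast the next residual as the
# mean of the last q residuals, then add it back to the AR forecast
q = 5
residual_forecast = sum(residuals[-q:]) / q
hybrid_forecast = phi * y[-1] + residual_forecast
print("phi:", round(phi, 3), "hybrid forecast:", round(hybrid_forecast, 3))
```

In the cited hybrid, stage 2 would be an LSTM whose time-step window is the q chosen by the information criterion; the final prediction is still the sum of the linear forecast and the residual-model forecast, exactly as in the last line above.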
“…In MML, there is prior knowledge (a prior distribution), π, over the parameter space, so MML is part of the Bayesian approach. Following Wallace and Freeman [15], MML87 is an extended version of MML; it has been shown to work well in time series models such as the autoregressive (AR) and moving average (MA) models [12,16]. Assuming the ARMA model parameters are given by β = (φ_1, ..., φ_p, θ_1, ..., θ_q, σ^2), following MML87 we seek the parameter values that minimize the message length given by:…”
confidence: 99%