2008
DOI: 10.1111/j.1467-9892.2008.00574.x
Estimation of Parameters in the NLAR(p) Model

Abstract: In this article, we study a new Laplace autoregressive model of order p, NLAR(p). Conditional least squares, weighted conditional least squares and maximum quasi-likelihood are used to estimate the model parameters. Comparisons among these estimates for the NLAR(2) model are given via simulation studies. Copyright © 2008 The Authors. Journal compilation © 2008 Blackwell Publishing Ltd.

Cited by 9 publications (5 citation statements) · References 8 publications
“…For the FAMVIOL data, which is underdispersed with small-count data, the EB-INAR(1) with m > 2 is also competitive. Based on the proof of the Theorem in [25], $q_{1t}(\theta_1)$ corresponds to $u_t$ and $q_{2t}(\theta_2)$ corresponds to $U_t$ in [25]. Based on the result of $V_1, W_1$ in Proposition 2 and $V_2, W_2$ in Proposition 3, we can obtain $M = E\big[q_{11}(\theta_{1,0})\,q_{21}(\theta_{2,0})\,\tfrac{\partial}{\partial\theta_1}\cdots\big]$…”
Section: Underdispersed Case
confidence: 98%
“…Based on the proof of the Theorem in [25], $q_{1t}(\theta_1)$ corresponds to $u_t$ and $q_{2t}(\theta_2)$ corresponds to $U_t$ in [25]. Based on the result of $V_1, W_1$ in Proposition 2 and $V_2, W_2$ in Proposition 3, we can obtain $M$.…”
Section: Proof of Proposition
confidence: 99%
“…To conduct the randomness test for the thinning parameter $\phi_t$, we also need to estimate the parameter $\theta = (\sigma_1^2, \sigma_2^2)^\top$ and establish the asymptotic behaviour of the estimators in the second step. For this purpose, we refer to Nicholls and Quinn [27] and Hwang and Basawa [31], as well as Zhu and Wang [32], and adopt the two-step least squares method that has been widely used for time series models with random coefficients. Denote…”
Section: Parameter Estimation and Asymptotic Properties of the Estimators
confidence: 99%
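The two-step least squares idea quoted above (in the spirit of Nicholls and Quinn's approach for random-coefficient autoregressions) can be sketched on a toy RCAR(1) model; the model form and all parameter values below are illustrative assumptions, not the specification from the cited paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy random-coefficient AR(1): X_t = phi_t * X_{t-1} + e_t,
# with phi_t ~ N(phi, sigma1^2) and e_t ~ N(0, sigma2^2).
phi, sigma1_sq, sigma2_sq, n = 0.4, 0.05, 1.0, 50_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = (phi + rng.normal(0.0, np.sqrt(sigma1_sq))) * x[t - 1] \
           + rng.normal(0.0, np.sqrt(sigma2_sq))

# Step 1: CLS for the mean parameter phi (OLS of X_t on X_{t-1}).
y, z = x[1:], x[:-1]
phi_hat = (z @ y) / (z @ z)

# Step 2: least squares on squared residuals, exploiting
# E(u_t^2 | F_{t-1}) = sigma1^2 * X_{t-1}^2 + sigma2^2.
u_sq = (y - phi_hat * z) ** 2
A = np.column_stack([z ** 2, np.ones_like(z)])
(sigma1_hat, sigma2_hat), *_ = np.linalg.lstsq(A, u_sq, rcond=None)
print(phi_hat, sigma1_hat, sigma2_hat)
```

With a long simulated path, the second-step regression recovers $\theta = (\sigma_1^2, \sigma_2^2)^\top$ because the squared one-step-ahead residuals have a conditional mean that is linear in $X_{t-1}^2$.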
“…The CLS method consists in minimizing the function $\mathrm{CLS}(\Psi) = \sum_{t=p+1}^{n}\big[X_t - E(X_t \mid \mathcal{F}_{t-1})\big]^2$. Since this function does not depend on $\phi$, we need an additional equation to estimate it. To do this, we consider a two-step procedure introduced by Karlsen and Tjøstheim (1988) and further generalized by Zhu and Wang (2008) to the NLAR(p) (new p-order Laplace autoregressive) model. Thus we take the estimator of the dispersion parameter as the argument that minimizes $\mathrm{CLS}(\phi) = \sum_{t=p+1}^{n}\big[X_t^2 - \tilde{E}(X_t^2 \mid \mathcal{F}_{t-1})\big]^2$, where $\tilde{E}(X_t^2 \mid \mathcal{F}_{t-1}) = \tilde{\mu}_t + \tilde{\mu}_t^2\big(1 + \phi^{-1} b(\xi_0)\big)$ and $\tilde{\mu}_t$ is the estimate of $\mu_t$ based on the CLS estimators $\tilde{\boldsymbol{\beta}}$ and $\tilde{\boldsymbol{\alpha}}$…”
Section: EM-Based Likelihood Inference
confidence: 99%
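A rough sketch of this two-step CLS procedure, under simplifying assumptions that are not from the paper: a hypothetical linear conditional mean $\mu_t = \beta + \alpha X_{t-1}$, and the correction term $\phi^{-1} b(\xi_0)$ collapsed into a single unknown constant $c$, so that the second-step objective is linear in $c$ and has a closed-form minimizer:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data-generating process for illustration only:
# conditional mean mu_t = beta + alpha * X_{t-1}, and second moment
# E(X_t^2 | F_{t-1}) = mu_t + mu_t^2 * (1 + c), with c standing in
# for the phi^{-1} b(xi_0) term in the quoted formula.
beta, alpha, c, n = 2.0, 0.5, 0.3, 100_000
x = np.zeros(n)
for t in range(1, n):
    mu = beta + alpha * x[t - 1]
    var = mu + c * mu * mu  # conditional variance implied by the moments
    # Gamma draw chosen for convenience to match these two moments.
    x[t] = rng.gamma(shape=mu * mu / var, scale=var / mu)

# Step 1: CLS for (beta, alpha) -- OLS of X_t on (1, X_{t-1}).
y, z = x[1:], x[:-1]
A = np.column_stack([np.ones_like(z), z])
(beta_hat, alpha_hat), *_ = np.linalg.lstsq(A, y, rcond=None)

# Step 2: plug in the fitted mean and minimize
# sum_t [X_t^2 - mu_t - mu_t^2 (1 + c)]^2 over c; since the
# objective is quadratic in c, the minimizer is available in
# closed form as a regression through the origin on mu_t^2.
mu_hat = beta_hat + alpha_hat * z
resid = y ** 2 - mu_hat - mu_hat ** 2
c_hat = (mu_hat ** 2 @ resid) / np.sum(mu_hat ** 4)
print(beta_hat, alpha_hat, c_hat)
```

The point of the sketch is the structure of the two steps: the first CLS problem pins down the conditional-mean parameters, and only the second, based on squared observations, carries information about the dispersion parameter.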