2012
DOI: 10.1080/03610918.2012.625790

M-Procedures for Detection of Changes for Dependent Observations

Abstract: This article extends known results on M-procedures for the detection of changes in a location model to the situation with dependent observations, in particular when the error terms fulfill mixing conditions. Theoretical results are accompanied by a simulation study. The results can be extended to more general models; however, the proofs become more cumbersome.

Cited by 16 publications (12 citation statements). References 16 publications.
“…We further set the penalty/threshold to be β = 2σ²log(n)E[φ(Z)²], where φ is the gradient of the loss function and Z is a standard Gaussian random variable. This is based on the Schwarz information criterion, adapted to account for the variability of the loss function that is used (see, e.g., theoretical results in Hušková and Marušiaková, 2012, for further justification), and for the biweight loss this is in line with Theorem 2.3, which suggested the use of a penalty proportional to log(n). We also compared to just using the standard square-error loss, implemented using FPOP (Maidstone et al., 2017), and to the WBS (Fryzlewicz, 2014) approach, which uses a standard CUSUM test statistic for detecting changepoints.…”
Section: Simulation Study: Accuracy (mentioning; confidence: 99%)
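The penalty in the quoted statement, β = 2σ²log(n)E[φ(Z)²], can be estimated by Monte Carlo for any loss gradient φ. The sketch below is illustrative only: the function names are ours (not from the cited packages), and the Huber gradient with the classical tuning constant c = 1.345 stands in for whatever robust loss is actually used.

```python
import math
import random

def huber_gradient(z, c=1.345):
    """Gradient of the Huber loss: identity inside [-c, c], clipped outside.
    The tuning constant c = 1.345 is the classical choice; an assumption here."""
    return max(-c, min(c, z))

def penalty(n, sigma2, phi, n_mc=200_000, seed=0):
    """Monte Carlo estimate of beta = 2 * sigma^2 * log(n) * E[phi(Z)^2],
    with Z standard Gaussian, as in the quoted citation statement."""
    rng = random.Random(seed)
    second_moment = sum(phi(rng.gauss(0.0, 1.0)) ** 2 for _ in range(n_mc)) / n_mc
    return 2.0 * sigma2 * math.log(n) * second_moment
```

As a sanity check, for the square-error loss φ(z) = z we have E[φ(Z)²] = 1, so β reduces to the familiar SIC-style penalty 2σ²log(n); a bounded gradient such as Huber's gives a strictly smaller penalty.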
“…The ordinary cumulative sums test seems to be inferior to the other tests for heavy-tailed t₃- or skewed χ₃²-distributed innovations, and not much better for Gaussian innovations. Additional simulations not reported here indicate that the advantage of the robust tests gets larger as the tails get heavier; see also Huskova & Marusiakova (2012) and Vogel & Wendler (2017).…”
Section: Lehmann Change-point Test Hereafter (mentioning; confidence: 62%)
“…Besides the cumulative sum test, we compare our test to further competitors also designed for shift detection in weakly dependent data. Extending work by de Jong & Davidson (2000), Huskova & Marusiakova (2012) suggest a version of the cumulative sum test based on the partial sums of M-residuals ψ{Yᵢ − μ̂ₙ(ψ)}, replacing the sign function used by the former authors by the Huber function ψ(x) = x·1(|x| ≤ cκ̂ₙ) + cκ̂ₙ·sgn(x)·1(|x| > cκ̂ₙ), where κ̂ₙ is a robust estimate of the standard deviation of the observations. The Huber function comprises the sign and the identity function as limiting cases as the tuning constant c ∈ [0, ∞) approaches zero or infinity, respectively.…”
Section: Simulation Results (mentioning; confidence: 99%)
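The Huber ψ described in the statement above can be sketched in a few lines. This is a minimal illustration, not code from the cited paper: the parameter `kappa` stands in for the robust scale estimate κ̂ₙ, which in practice would come from something like the MAD.

```python
def huber_psi(x, c=1.345, kappa=1.0):
    """Huber function: identity on [-c*kappa, c*kappa], clipped to
    +/- c*kappa outside.  It approaches a (scaled) sign function as
    c -> 0 and the identity as c -> infinity."""
    bound = c * kappa
    return max(-bound, min(bound, x))
```

With the default c = 1.345 (a classical tuning choice, assumed here), small residuals pass through unchanged while large ones are clipped to ±cκ̂ₙ.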
“…where ω > 0, α ≥ 0, and β ≥ 0. For mean estimates, we focus on procedures based on M-procedures of [19], where the parameter estimator θ̂ₜ is a solution to the equation…”
Section: Methods (mentioning; confidence: 99%)
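The estimating equation referred to above is of the usual location M-estimation form Σᵢ ψ(Yᵢ − θ) = 0. For a monotone ψ (such as Huber's) the left-hand side is non-increasing in θ, so the root can be found by bisection. The sketch below is a generic illustration under that assumption; the function name is ours, not from [19].

```python
def m_estimate(ys, psi, tol=1e-10):
    """Location M-estimate: the theta solving sum_i psi(y_i - theta) = 0,
    found by bisection.  Valid for non-decreasing psi, which makes the
    sum non-increasing in theta; the root lies in [min(ys), max(ys)]."""
    lo, hi = min(ys), max(ys)

    def g(theta):
        return sum(psi(y - theta) for y in ys)

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:   # sum still positive: root is to the right
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With ψ the identity this recovers the sample mean, while a clipped ψ downweights outliers: for data with one gross outlier the Huber-type estimate stays near the bulk of the sample rather than being dragged toward the mean.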