2020
DOI: 10.1007/s42952-020-00081-6
Short communication: Detecting possibly frequent change-points: wild binary segmentation 2 and steepest-drop model selection

Cited by 5 publications (5 citation statements)
References 12 publications
“…This is in contrast to Fryzlewicz (2018), whose methodology estimates at least 10 changepoints, and Baranowski et al (2019) who find five changepoints in the series. We believe that the most likely explanation for this is the presence of significant autocorrelation within the series, which may cause the methods to overfit the number of changepoints, as noted in Lund and Shi (2020). To illustrate this point, in Figure 5, we plot the estimated autocorrelation for the mean removed Newham series, at lags 1, 2, and 3.…”
Section: Data Applications
confidence: 99%
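The diagnostic quoted above, i.e. the sample autocorrelation of a mean-removed series at lags 1, 2, and 3, can be sketched as follows. This is a minimal illustration, not the cited authors' code; the series values are hypothetical stand-ins for the mean-removed Newham house-price series.

```python
def sample_acf(x, lag):
    """Sample autocorrelation at the given lag: the sum of lagged
    products of deviations divided by the sum of squared deviations."""
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    denom = sum(d * d for d in dev)
    num = sum(dev[t] * dev[t + lag] for t in range(n - lag))
    return num / denom

# Hypothetical series; in the cited application this would be the
# Newham series with its estimated mean structure removed.
series = [1.0, 2.0, 3.0, 4.0, 5.0]
print([round(sample_acf(series, k), 3) for k in (1, 2, 3)])
```

Large positive values at low lags would support the quoted explanation that autocorrelation, rather than genuine mean shifts, drives the extra detected changepoints.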
“…One approach for dealing with autocorrelation when testing for changes in mean is to increase the threshold above which changes are detected; see Lavielle (1999), for example. However, it is difficult to systematically choose the threshold without performing some sort of preestimation of the autocovariance, which is highly challenging in the presence of mean changes (Lund & Shi, 2020). If one simply raises the threshold, then there will be a trade‐off between a decreased false positive rate and a decreased true positive rate.…”
Section: Introduction
confidence: 99%
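The threshold trade-off described in this quote has a simple quantitative face for AR(1) noise: the long-run variance exceeds the i.i.d. innovation-matched variance by the standard factor (1 + φ)/(1 − φ), so a detection threshold calibrated for independent noise is too low whenever φ > 0. A minimal sketch (the formula is standard; the choice of φ is illustrative):

```python
def lrv_inflation(phi):
    """Ratio of the long-run variance of a stationary AR(1) process to
    that of i.i.d. noise with the same marginal-variance normalization:
    (1 + phi) / (1 - phi)."""
    if not -1 < phi < 1:
        raise ValueError("AR(1) is stationary only for |phi| < 1")
    return (1 + phi) / (1 - phi)

# With phi = 0.7, averages (and hence CUSUM statistics) fluctuate about
# 5.67 times more in variance than under independence, so an i.i.d.
# threshold badly underestimates the null fluctuation level.
print(round(lrv_inflation(0.7), 3))  # → 5.667
```

This is why, as the quote notes, one cannot fix the threshold without some pre-estimate of the autocovariance, and why raising it blindly trades false positives for missed changes.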
“…As previously discussed, most changepoint techniques mistakenly flag changepoints when underlying positive dependence is ignored. For example, [26] argues that shifts identified in the London house price series of [27] may be more attributable to the positive correlations in the series than to actual mean shifts. CUSUM based techniques are known to degrade with positive correlation [12].…”
Section: Changepoints in AR(p) Series
confidence: 99%
“…Any ARMA(p, q) error model can be fitted to the centered series. A genetic algorithm (GA) can be used to estimate the optimal penalized likelihood over all changepoint configurations under a variety of penalization schemes [Lund and Shi, 2020]. Unfortunately, the GA may take significant computing time to find the optimum: there are 2 N −1 different multiple changepoint configurations in a series of length N to search over.…”
Section: Model and Estimation Approaches
confidence: 99%
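The search-space size quoted above follows from a simple count: each of the N − 1 interior boundaries of a length-N series either is or is not a changepoint, giving 2^(N−1) configurations. A quick sketch of why exhaustive search is infeasible and a heuristic such as a genetic algorithm is needed:

```python
def num_configurations(n):
    """Each of the n - 1 positions between consecutive observations can
    independently be a changepoint or not: 2 ** (n - 1) subsets."""
    return 2 ** (n - 1)

# Even modest series lengths rule out brute-force enumeration:
for n in (10, 20, 100):
    print(n, num_configurations(n))
```

At N = 100 the count already exceeds 10^29, which is why the quoted approach optimizes the penalized likelihood with a genetic algorithm rather than enumerating configurations.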
“…6.1 Changepoints in AR(p) Series. Some techniques can mistakenly flag changepoints when underlying positive dependence is ignored. For example, Lund and Shi [2020] argue that the London house price series shifts identified may be more attributable to the positive correlations in the series than to actual mean shifts. CUSUM based techniques are severely affected by positive correlation Shi et al [2021].…”
Section: Applications
confidence: 99%