2022
DOI: 10.1111/insr.12511

Scalable Bayesian Multiple Changepoint Detection via Auxiliary Uniformisation

Abstract: In this paper, we perform a sparse filtering recursion for efficient changepoint detection for discrete‐time observations. We attach auxiliary event times to the chronologically ordered observations and thereby formulate multiple changepoint problems for discrete‐time observations in continuous time. Ideally, both the computational and memory costs of the proposed auxiliary uniformisation forward‐filtering backward‐sampling algorithm can be quadratically scaled down to the number of changepoints …

Cited by 2 publications (9 citation statements)
References 39 publications
“…When there exists exactly one jump, $\log p\left(\mathbf{X}(t_{k+1})=i+1,\ \mathbf{Y}(t_k,t_{k+1}] \mid \mathbf{X}(t_k)=i\right)$ is maximized exactly at $t_k$ or $t_{k+1}$. Therefore, the MAP estimate of the set of changepoints is equivalently obtainable from the Viterbi algorithm in discrete time; see Proposition 1 in Lu (2022). Nevertheless, both the computational cost and the memory cost are quadratic in $n$ in this case, which is prohibitive for a long sequence of observations.…”
Section: Multiple Changepoint Detection With Missing Data
confidence: 98%
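The quoted statement notes that the discrete-time Viterbi MAP recursion costs $O(n^2)$ in both time and memory. A minimal sketch of such a quadratic dynamic program, using an illustrative unit-variance Gaussian segment log-likelihood and a hypothetical per-changepoint log-penalty rather than the model of the cited paper:

```python
import numpy as np

def map_changepoints(y, log_penalty=-3.0):
    """MAP segmentation by a Viterbi-style dynamic program.

    O(n^2) time and memory, as the quoted statement notes: for every
    end index t we scan all candidate previous changepoints s < t.
    The segment score is a unit-variance Gaussian fit, an illustrative
    stand-in for the likelihood in the paper.
    """
    n = len(y)
    cs = np.concatenate([[0.0], np.cumsum(y)])
    cs2 = np.concatenate([[0.0], np.cumsum(np.asarray(y) ** 2)])

    def seg_loglik(s, t):
        # log-likelihood of y[s:t] as one segment (constants dropped)
        m = t - s
        mean = (cs[t] - cs[s]) / m
        sse = (cs2[t] - cs2[s]) - m * mean ** 2
        return -0.5 * sse

    best = np.full(n + 1, -np.inf)
    best[0] = 0.0
    back = np.zeros(n + 1, dtype=int)
    for t in range(1, n + 1):
        for s in range(t):  # inner scan over all earlier changepoints
            score = best[s] + seg_loglik(s, t) + (log_penalty if s > 0 else 0.0)
            if score > best[t]:
                best[t], back[t] = score, s
    # backtrack to recover changepoint locations
    cps, t = [], n
    while t > 0:
        s = back[t]
        if s > 0:
            cps.append(s)
        t = s
    return sorted(cps)
```

The doubly nested loop over `(s, t)` is exactly the quadratic cost the statement flags; the paper's auxiliary-uniformisation scheme is designed to avoid it.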
“…In this case, both the computational and memory costs can be scaled down quadratically to the number of changepoints; see Lu (2021). The model formulation allows multiple changepoint detection for both continuous‐time observations and discrete‐time ones in a unified approach (Lu, 2022). Assume $m$ changepoints appear at unknown locations $0 \triangleq \tau_0 < \tau_1 < \cdots < \tau_m < \tau_{m+1} \triangleq T$, such that $\mathrm{y}_{1:n}$ is partitioned into $m+1$ segments by the $m$ changepoints $\boldsymbol{\tau} = (\tau_1, \dots, \tau_m)$.…”
Section: Multiple Changepoint Detection With Missing Data
confidence: 99%
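The continuous-time formulation above rests on uniformisation: jump times of a continuous-time Markov chain are dominated by a Poisson process whose rate bounds every exit rate, and each candidate time is either a real or a "virtual" (self-) transition. A generic sketch of this construction, not the paper's specific sampler:

```python
import numpy as np

def uniformised_path(Q, T, x0, rng):
    """Sample a CTMC path on [0, T] by uniformisation.

    Candidate event times come from a Poisson process with dominating
    rate omega >= max exit rate of generator Q; at each candidate the
    chain moves according to B = I + Q/omega, so some candidates are
    virtual self-transitions. Illustrative sketch only.
    """
    Q = np.asarray(Q, dtype=float)
    omega = np.max(-np.diag(Q))          # dominating rate
    B = np.eye(Q.shape[0]) + Q / omega   # uniformised transition matrix
    n_events = rng.poisson(omega * T)    # number of candidate times
    times = np.sort(rng.uniform(0.0, T, size=n_events))
    states, x = [x0], x0
    for _ in times:
        x = rng.choice(Q.shape[0], p=B[x])
        states.append(x)
    return times, states
```

In the changepoint model quoted above, the chain only steps $i \to i+1$, so real transitions correspond exactly to changepoints among the auxiliary event times.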
“…Thus, potential model bias from assuming all $q_i$ are equal is avoided. Because the posterior of the number of change points and their locations is sensitive to the specification of the change point recurrence rate (Lu 2023), and because a subjective selection or a rough estimate of these hyperparameters is suboptimal, we assume that all $q_i$ are unknown and that a conjugate prior $\Gamma(a, b)$ is used. These hyperparameters are also estimated via an empirical Bayesian approach; see Du et al (2016).…”
Section: Model Formulation
confidence: 99%
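The conjugacy invoked in the quote is the standard Gamma–exponential pair: with a $\Gamma(a, b)$ prior on a recurrence rate $q_i$ and exponentially distributed holding times, the posterior is again Gamma with updated shape and rate. A minimal sketch of that update (names and parameterisation are ours, not the cited paper's):

```python
import numpy as np

def gamma_rate_posterior(a, b, holding_times):
    """Conjugate update for a recurrence rate q_i.

    Prior: q_i ~ Gamma(shape=a, rate=b). Likelihood: k exponential
    holding times with rate q_i. Posterior: Gamma(a + k, b + sum of
    holding times). Illustrative sketch of the conjugacy the quoted
    statement relies on.
    """
    t = np.asarray(holding_times, dtype=float)
    return a + t.size, b + t.sum()
```

The posterior mean $(a + k)/(b + \sum_j t_j)$ then shrinks the empirical rate toward the prior, which is what makes the hyperparameters $(a, b)$ natural targets for the empirical Bayes step mentioned in the quote.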