Which Is Better, an Ensemble of Positive–Negative Pairs or a Centered Spherical Simplex Ensemble? (2004)
DOI: 10.1175/1520-0493(2004)132<1590:wibaeo>2.0.co;2

Abstract: New methods to center the initial ensemble perturbations on the analysis are introduced and compared with the commonly used centering method of positive–negative paired perturbations. In the new method, one linearly dependent perturbation is added to a set of linearly independent initial perturbations to ensure that the sum of the new initial perturbations equals zero; the covariance calculated from the new initial perturbations is equal to the analysis error covariance estimated by the independent initial per…

Cited by 201 publications (249 citation statements)
References 19 publications
“…The use of the symmetric square root to determine W^a from P̃^a (as compared to, for example, a Cholesky factorization, or the choice described in [4]) is important for two main reasons. First, as we will see below, it ensures that the sum of the columns of X^a is zero, so that the analysis ensemble has the correct sample mean (this is also shown for the symmetric square root in [44]). Second, it ensures that W^a depends continuously on P̃^a; while this may be a desirable property in general, it is crucial in a local analysis scheme, so that neighboring grid points with slightly different matrices P̃^a do not yield very different analysis ensembles.…”
Section: LETKF: A Local Ensemble Transform Kalman Filter
Confidence: 91%
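The mean-preservation property described in the excerpt above can be checked numerically. The following NumPy sketch (not code from any of the cited papers; the dimensions, observation operator H, and unit observation-error covariance are arbitrary assumptions) builds the ETKF transform matrix and shows that the symmetric square root keeps the analysis deviations centered, while a one-sided square root generally does not:

```python
import numpy as np

# Sketch: if the forecast deviations Zf sum to zero across the ensemble,
# the ones vector is an eigenvector (eigenvalue 1) of
# A = I + (H Zf)^T R^{-1} (H Zf), so the symmetric square root satisfies
# S_sym @ 1 = 1 and the analysis deviations Zf @ S_sym stay centered.
rng = np.random.default_rng(0)
n, k, m = 5, 4, 3                      # state dim, ensemble size, obs dim

Zf = rng.standard_normal((n, k))
Zf -= Zf.mean(axis=1, keepdims=True)   # forecast deviations: Zf @ 1 = 0

H = rng.standard_normal((m, n))        # hypothetical observation operator
R_inv = np.eye(m)                      # unit observation-error covariance

# A = I + (H Zf)^T R^{-1} (H Zf) = C (Gamma + I) C^T
A = np.eye(k) + (H @ Zf).T @ R_inv @ (H @ Zf)
gam_plus_1, C = np.linalg.eigh(A)

S_sym = C @ np.diag(gam_plus_1 ** -0.5) @ C.T   # symmetric square root
S_one = C @ np.diag(gam_plus_1 ** -0.5)         # one-sided square root

Za_sym = Zf @ S_sym                    # analysis deviations, symmetric choice
Za_one = Zf @ S_one                    # analysis deviations, one-sided choice

print(np.abs(Za_sym.sum(axis=1)).max())   # ~0: sample mean preserved
print(np.abs(Za_one.sum(axis=1)).max())   # generally nonzero
```

This is the distinction at the heart of the cited comparison: both matrices are square roots of A⁻¹, but only the symmetric one is guaranteed to leave the ensemble mean on the analysis.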
“…Following Wang et al. (2004), the unique symmetric square root should be chosen to compute the transform matrix in Eq. (4).…”
Section: Linear Filtering and the ETKF
Confidence: 99%
“…In particular we use the ETKF (Bishop et al., 2001; Tippett et al., 2003; Wang et al., 2004), which seeks a transformation S ∈ R^{k×k} such that the analysis deviation ensemble Z^a is given as a deterministic perturbation of the forecast ensemble Z^f via Z^a = Z^f S. Details of the implementation can be found in Bishop et al. (2001), Tippett et al. (2003), and Wang et al. (2004). Alternatively one could choose the ensemble adjustment filter (Anderson, 2001), in which the ensemble deviation matrix Z^f is pre-multiplied with an appropriately determined matrix A ∈ R^{N×N}.…”
Section: The Variance-Limiting Kalman Filter
Confidence: 99%
“…In a perfect-model data assimilation scheme, unrealistic overestimation of the error covariances occurs in sparse observational networks as a finite-size effect; it was shown that, for short observational intervals of up to 10 h, the VLKF produces superior skill when compared to the standard ensemble transform Kalman filter (ETKF) (Bishop et al., 2001; Tippett et al., 2003; Wang et al., 2004). Here we will apply the VLKF in the case of an imperfect, underdamped forecast model, and for large observation intervals.…”
Section: Introduction
Confidence: 99%