2012
DOI: 10.1016/j.csda.2012.02.027

Estimating discrete Markov models from various incomplete data schemes

Abstract: The parameters of a discrete stationary Markov model are the transition probabilities between states. Traditionally, data consist of sequences of observed states for a given number of individuals over the whole observation period. In such a case, the estimation of transition probabilities is straightforward: one counts the one-step moves from a given state to another. In many real-life problems, however, the inference is much more difficult, as state sequences are not fully observed, namely the state of each ind…
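For readers unfamiliar with the count-based estimation the abstract refers to, the fully observed case admits the standard maximum-likelihood estimator sketched below; the count notation n_{ij} for the number of observed one-step moves from state i to state j is introduced here for illustration and is not taken from the paper.

```latex
% Count-based MLE of transition probabilities from fully observed sequences
% (standard result; the notation n_{ij} is an illustrative assumption, not the paper's).
\[
\hat{p}_{ij} \;=\; \frac{n_{ij}}{\sum_{k=1}^{K} n_{ik}},
\qquad
n_{ij} \;=\; \sum_{\text{sequences}} \sum_{t} \mathbf{1}\!\left\{ X_t = i,\; X_{t+1} = j \right\},
\]
```

where K is the number of states. This estimator requires every state X_t to be observed, which is precisely the assumption the paper relaxes.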

Year Published: 2014–2024

Cited by 20 publications (14 citation statements)
References 64 publications
“…We provide in the following a brief presentation of Gibbs sampling. Additional implementation details are in Appendix, and we refer to Robert and Casella (2004) for the general principles underlying MCMC algorithms and to Pasanisi et al (2012) for an extended description of Gibbs sampling to infer transition probabilities in temporal sequences. In addition to the full explanation below, we also provide a pseudocode of the procedure (Box 1).…”
Section: Step 2: Gibbs Sampling Methodology (mentioning)
confidence: 99%
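The pseudocode the citing authors reference (their Box 1) is not reproduced in this report. As a rough illustration of the kind of data-augmentation Gibbs sampler being described, here is a minimal sketch assuming a uniform Dirichlet prior and sequences in which unobserved states are marked as None; all names (gibbs_transition_matrix, prior, etc.) are ours and not taken from the cited papers.

```python
import numpy as np

def gibbs_transition_matrix(observed, K, n_iter=2000, prior=1.0, seed=0):
    """Minimal data-augmentation Gibbs sampler for a discrete Markov chain.

    observed: list of sequences; each entry is a state in {0, ..., K-1} or None
    (missing). Returns the array of sampled transition matrices.
    """
    rng = np.random.default_rng(seed)
    # Initialise: keep observed states, fill missing entries uniformly at random.
    completed = [[x if x is not None else int(rng.integers(K)) for x in s]
                 for s in observed]
    missing = [[t for t, x in enumerate(s) if x is None] for s in observed]
    P = np.full((K, K), 1.0 / K)          # current transition matrix
    draws = []
    for _ in range(n_iter):
        # 1) Resample each missing state from its full conditional given its
        #    neighbours and the current transition matrix P.
        for s, miss in zip(completed, missing):
            for t in miss:
                probs = np.ones(K)
                if t > 0:
                    probs *= P[s[t - 1], :]     # coming from the previous state
                if t + 1 < len(s):
                    probs *= P[:, s[t + 1]]     # leading into the next state
                probs /= probs.sum()
                s[t] = int(rng.choice(K, p=probs))
        # 2) Count one-step moves in the completed data and draw each row of P
        #    from its Dirichlet full conditional (uniform prior: all factors = 1).
        W = np.zeros((K, K))
        for s in completed:
            for a, b in zip(s[:-1], s[1:]):
                W[a, b] += 1
        P = np.vstack([rng.dirichlet(prior + W[i]) for i in range(K)])
        draws.append(P.copy())
    return np.asarray(draws)
```

For instance, gibbs_transition_matrix([[0, None, 1, 1, None, 0]], K=2)[1000:].mean(axis=0) gives a posterior-mean estimate of the 2x2 transition matrix; the 1000-iteration burn-in is an arbitrary choice for this toy example.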
“…where Dir is the Dirichlet distribution and g are biasing factors, set here uniformly to 1 as we include no prior knowledge on the shape of the transition matrix (Pasanisi et al, 2012). w_{i,j} are the sufficient statistics reflecting the transitions in the augmented data Z^{[h−1]}, formally defined as…”
Section: Step 2: Gibbs Sampling Methodology (mentioning)
confidence: 99%
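Reading the superscript as the augmented data from the previous Gibbs iteration, Z^{[h−1]}, the update described in the quote has the familiar conjugate Dirichlet–multinomial form; the display below is a reconstruction of that standard form under those assumptions, not a transcription of the cited equation.

```latex
% Full conditional for row i of the transition matrix, given the augmented
% data Z^{[h-1]}: g_{i,j} are the biasing (prior) factors, all set to 1 in the
% quoted study, and w_{i,j} counts the observed i -> j moves in Z^{[h-1]}.
\[
\bigl(p_{i,1},\ldots,p_{i,K}\bigr)\,\bigm|\,Z^{[h-1]}
\;\sim\;
\mathrm{Dir}\!\bigl(g_{i,1}+w_{i,1},\;\ldots,\;g_{i,K}+w_{i,K}\bigr),
\qquad
w_{i,j}=\#\bigl\{t:\;z_t=i,\;z_{t+1}=j\ \text{in}\ Z^{[h-1]}\bigr\}.
\]
```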
“…We provide in the following a brief presentation of Gibbs sampling. Additional implementation details are in Appendix 1.2, and we refer to Robert and Casella (2004) for the general principles underlying MCMC algorithms and to Pasanisi et al (2012) for an extended description of Gibbs sampling to infer transition probabilities in temporal sequences.…”
mentioning
confidence: 99%
“…To overcome this difficulty, we employed here the original methodology developed in Lienard et al (2014). Specifically, we applied Gibbs sampling, a Markov chain Monte Carlo implementation (Robert and Casella, 2004), to estimate the transition probabilities, with the specific guidelines provided by Pasanisi et al (2012). We provide in the following a brief description of the algorithm; please refer to Lienard et al (2014) for an extended description. First, we constructed a temporal sequence S_p of the shade tolerance index for each plot p, by inserting the discretized value of the shade tolerance index s(p, i) measured in the i-th year at position i of S_p.…”
Section: Statistical Analyses of the FIA Database (mentioning)
confidence: 99%
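To make the workflow in that last excerpt concrete, the sketch below builds one temporal sequence per plot from (plot, year, value) records by discretizing the shade-tolerance index and leaving unmeasured years as missing states; the bin edges, the record layout, and the call to the gibbs_transition_matrix sketch given earlier are illustrative assumptions, not the cited authors' code.

```python
import numpy as np

def build_sequences(records, bins):
    """Build one temporal sequence S_p per plot: the discretised shade-tolerance
    index s(p, i) measured in year i goes at position i of S_p, and years with
    no measurement stay None (i.e. missing states for the sampler)."""
    by_plot = {}
    for plot, year, value in records:          # records: (plot_id, year_index, raw index)
        by_plot.setdefault(plot, {})[year] = int(np.digitize(value, bins))
    sequences = []
    for plot in sorted(by_plot):
        measured = by_plot[plot]
        length = max(measured) + 1
        sequences.append([measured.get(i) for i in range(length)])
    return sequences

# Toy usage (made-up values; two bin edges give three discrete states):
records = [("p1", 0, 0.20), ("p1", 2, 0.70), ("p2", 0, 0.40), ("p2", 1, 0.90)]
seqs = build_sequences(records, bins=[0.33, 0.66])
# posterior_draws = gibbs_transition_matrix(seqs, K=3)   # sampler sketched above
```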