2013
DOI: 10.1007/978-1-4614-6312-2_6
Higher-Order Markov Chains

Cited by 27 publications (22 citation statements)
References 47 publications
“…Before starting the Monte Carlo procedures, we set X to the averaged value of actual signals for a given word. 6) From the simulated signals X_n, ACVFs are calculated using Equation (1) and compared with those obtained in the preceding step. As Equation (9) shows, the signals X_t are considered to be consequences of the time-varying conditional probabilities for word occurrence given by Equation (9) in the framework of additive binary Markov chain theory. In this sense, the unobserved time-varying probabilities given by Equation (9) … We further calculated the time-varying probabilities of X_n being 1 for typical Type-I words selected from five academic books.…”
Section: ( )
mentioning
confidence: 99%
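The Monte Carlo procedure quoted above (simulate binary occurrence signals from given probabilities, then compare autocovariance functions) can be sketched roughly as follows. This is an illustrative simplification, not the cited authors' additive binary Markov chain implementation; all function names are ours, and `p` stands in for the time-varying conditional probability of Equation (9):

```python
import random

def simulate_binary_chain(p, n, seed=0):
    """Simulate a binary signal X_t with P(X_t = 1) = p(t), where p may be
    time-varying. A simplified stand-in for the excerpt's simulation step."""
    rng = random.Random(seed)
    return [1 if rng.random() < p(t) else 0 for t in range(n)]

def acvf(x, max_lag):
    """Sample autocovariance function of a signal (cf. Equation (1))."""
    n = len(x)
    mean = sum(x) / n
    return [sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k)) / n
            for k in range(max_lag + 1)]

# Example: a constant occurrence probability of 0.3 over 1000 steps.
xs = simulate_binary_chain(lambda t: 0.3, 1000)
cs = acvf(xs, 10)  # lag-0 entry is the sample variance
```

In the quoted workflow, the ACVF of the simulated signals would then be compared against the ACVF computed from the actual word-occurrence signals.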
“…As that figure shows, the two features described above are common among all cases, indicating that the two features are substantial for all Type-I words. Figure 7. Conditional probabilities of X_n being 1, calculated using Equation (9); the procedures for obtaining these conditional probabilities are the same as those for Figure 6.…”
Section: ( )
mentioning
confidence: 99%
“…beyond the only two training dates allowed by current software packages. In this sense, the integration of higher-order Markov chains (Ching et al. 2013) into LUCC modelling tools could be a path worth considering, because the successive adjustments of the Markovian matrices proposed here cannot constitute a robust methodology applicable to all cases and all types of thematic data.…”
Section: Discussion
mentioning
confidence: 99%
“…A typical practice is to train several Markov chains independently on the corresponding subsets of the data. However, this approach usually requires a large training dataset to prevent over-fitting of the model [18]. The problem of over-fitting can become severe when the Markov chain is of higher order, since the number of parameters grows exponentially with the order of the chain.…”
Section: Likelihood Distribution: Gaussian Process Markov Chains
mentioning
confidence: 99%
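The exponential parameter growth noted above can be made concrete: a k-th order chain over S states conditions on S^k possible histories, each contributing S − 1 free transition probabilities (the last one is fixed by normalization). A minimal sketch, with the function name ours rather than from the cited work:

```python
def n_parameters(n_states, order):
    """Free transition parameters of an order-k Markov chain over S states:
    one history per length-k tuple (S**k of them), each with S-1 free
    probabilities since each row must sum to 1."""
    return n_states ** order * (n_states - 1)

# With S = 4 states, the count grows by a factor of S per extra order:
# order 1 -> 12, order 2 -> 48, order 3 -> 192.
```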
“…A common choice among available solutions is to assume the Markov property [2][3][4][5][6], which asserts that the next state depends only on nearby states and is conditionally independent of earlier states. In present-day applications of Markov models, the first-order Markov property [18], which states that the probability distribution of the future state depends only on the present state, is the most frequently used, but it is unlikely to be satisfied in our case. We therefore relaxed the assumption to allow the future state to depend on past states.…”
Section: Markov Chains For Drawing Processes
mentioning
confidence: 99%
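Relaxing the first-order assumption as the excerpt describes amounts to conditioning each transition on the last k states; equivalently, an order-k chain is a first-order chain whose states are k-tuples. A minimal counting estimator along these lines, with illustrative names not taken from the cited work:

```python
from collections import Counter, defaultdict

def fit_kth_order(seq, k):
    """Estimate order-k transition probabilities P(next | last k states)
    by counting transitions in an observed sequence. Each length-k history
    tuple maps to a distribution over successor states."""
    counts = defaultdict(Counter)
    for i in range(k, len(seq)):
        counts[tuple(seq[i - k:i])][seq[i]] += 1
    return {hist: {s: c / sum(nxt.values()) for s, c in nxt.items()}
            for hist, nxt in counts.items()}

# Example: in "ABABAB" with k = 2, the history ('A', 'B') is always
# followed by 'A', so its estimated probability is 1.0.
probs = fit_kth_order(list("ABABAB"), 2)
```

The tuple-as-state view also explains the parameter blow-up discussed earlier: the history space, and hence the table of distributions, grows exponentially in k.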