2016
DOI: 10.1007/s10614-016-9617-9
A Practical, Accurate, Information Criterion for Nth Order Markov Processes

Abstract: The recent increase in the breadth of computational methodologies has been matched with a corresponding increase in the difficulty of comparing the relative explanatory power of models from different methodological lineages. In order to help address this problem, a Markovian information criterion (MIC) is developed that is analogous to the Akaike information criterion (AIC) in its theoretical derivation and yet can be applied to any model able to generate simulated or predicted data, regardless of its methodolog…

Cited by 48 publications (62 citation statements)
References 45 publications
“…In many cases, this amounts at replicating the largest possible number of stylized facts characterizing the phenomenon of interest (see Dosi et al., 2010, 2013, 2015 for business cycle properties, credit and interbank markets or Pellizzari and Forno, 2006; Jacob Leal et al., 2015 for financial markets). Recent attempts are trying to enrich empirical validation going beyond simple replication of empirical regularities, thereby requesting models to generate series that exhibit the same dynamics (Marks, 2013; Lamperti, 2015), conditional probabilistic structure (Barde, 2015) and causal relations (Guerini and Moneta, 2016) as those observed in the real world data. At least partially, such contributions have been motivated by the unsatisfactory results delivered by calibration.…”
Section: Introduction (mentioning)
confidence: 99%
“…Recently, Kukacka and Barunik (2016) have proposed non-parametric simulated maximum likelihood, and apply it to the estimation of the model described in Brock and Hommes (1998). Other contributions have introduced sophisticated validation methods aimed at measuring the distance between real and simulated data, where the simulated data can either come from the original AB model (Fabretti, 2013; Marks, 2013; Lamperti, 2015; Recchioni et al., 2015; Barde, 2016; Guerini and Moneta, 2016), or from a surrogate model (Salle and Yildizoglu, 2014; Sani et al., 2016). Minimisation of such distance metrics within a SMD approach leads to alternative estimators with respect to those commonly employed in the literature, such as MSM or II, although the properties of these estimators, including consistency, and their associated uncertainty have yet to be fully assessed.…”
mentioning
confidence: 99%
“…The Markov Information Criterion (MIC) is a recent model comparison methodology developed in Barde (2016a) that provides a measurement of the cross entropy between a model and an empirical data set for any model reducible to a Markov process of arbitrary order. In an analogous manner to a traditional information criterion (AIC, BIC, etc.…”
Section: Markov Information Criterion and Model Confidence Set (mentioning)
confidence: 99%
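To make the idea of a cross-entropy measurement concrete, the following is a minimal Python sketch, an illustration under stated assumptions rather than Barde's MIC implementation: it scores an observed discrete series by the average log loss of a model's conditional predictions given the last few observations. The function name cross_entropy_score and the toy fair-coin model are hypothetical; the actual MIC builds the conditional distributions from simulated model output via context tree weighting and applies a bound correction, neither of which appears here.

import numpy as np

def cross_entropy_score(series, predict_next, order=2):
    """Average log loss, in bits per observation, of a model on a discrete series.

    predict_next(context) must return a dict mapping each possible symbol to
    the model's conditional probability given the last `order` observations.
    """
    total_bits = 0.0
    for t in range(order, len(series)):
        context = tuple(series[t - order:t])
        p = max(predict_next(context).get(series[t], 0.0), 1e-12)  # avoid log(0)
        total_bits += -np.log2(p)
    return total_bits / (len(series) - order)

# Toy check: a fair-coin model scored on i.i.d. binary data costs about 1 bit
# per observation, the entropy of a fair coin.
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=1000)
print(cross_entropy_score(data, lambda ctx: {0: 0.5, 1: 0.5}))

Lower scores mean the model's conditional structure is closer to that of the data, which is the sense in which the MIC ranks competing models.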
“…The CTW algorithm is proven to provide an optimal learning performance, in the sense that it achieves the theoretical lower bound on the learning error. As explained in Barde (2016a), this means that a bound correction procedure can be applied to the raw CTW score to correct the measurement error due to learning, thus enabling an accurate measurement of the informational distance between the model and the data. By considering the model as an input/output response function, i.e.…”
mentioning
confidence: 99%
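As a rough sketch of the learning step described above, the code below implements textbook binary context tree weighting (CTW) with the Krichevsky-Trofimov estimator and returns the code length of each symbol in bits. It is a generic illustration, not Barde's implementation: in the MIC the tree is trained on simulated model output, the empirical series is then scored, and the raw score is adjusted by the bound correction, which is not reproduced here. The class and helper names (BinaryCTW, _log2_mix) are hypothetical.

import math
import random

class CTWNode:
    __slots__ = ("a", "b", "log_pe", "log_pw", "children")
    def __init__(self):
        self.a = 0          # zeros observed in this context
        self.b = 0          # ones observed in this context
        self.log_pe = 0.0   # log2 of the node's Krichevsky-Trofimov estimate
        self.log_pw = 0.0   # log2 of the node's weighted probability
        self.children = {}  # keyed by the next-older context bit

def _log2_mix(x, y):
    # Stable evaluation of log2(0.5 * 2**x + 0.5 * 2**y)
    m, n = max(x, y), min(x, y)
    return m - 1.0 + math.log2(1.0 + 2.0 ** (n - m))

class BinaryCTW:
    """Minimal fixed-depth binary CTW; update() returns a code length in bits."""
    def __init__(self, depth):
        self.depth = depth
        self.root = CTWNode()

    def update(self, context, symbol):
        # context must hold exactly `depth` past bits, most recent last
        nodes = [self.root]
        for bit in reversed(context):
            nodes.append(nodes[-1].children.setdefault(bit, CTWNode()))
        before = self.root.log_pw
        for d, node in reversed(list(enumerate(nodes))):
            # Krichevsky-Trofimov sequential update of the local estimate
            count = node.b if symbol == 1 else node.a
            node.log_pe += math.log2((count + 0.5) / (node.a + node.b + 1.0))
            if symbol == 1:
                node.b += 1
            else:
                node.a += 1
            if d == self.depth:
                node.log_pw = node.log_pe  # leaf node: nothing to mix
            else:
                kids = sum(c.log_pw for c in node.children.values())
                node.log_pw = _log2_mix(node.log_pe, kids)
        return before - self.root.log_pw  # -log2 P(symbol | past), in bits

# Toy usage: the average code length of an i.i.d. fair-coin series approaches
# 1 bit per symbol, the theoretical entropy rate.
random.seed(1)
series = [random.randint(0, 1) for _ in range(500)]
D = 3
ctw = BinaryCTW(D)
bits = sum(ctw.update(series[t - D:t], series[t]) for t in range(D, len(series)))
print(bits / (len(series) - D))

Summing these per-symbol code lengths over the empirical series gives a raw score; in Barde (2016a) that raw CTW score is then adjusted by the bound correction mentioned above before models are compared.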