Markov chain Monte Carlo (MCMC) methods are widely used in many fields of study to estimate the average properties of complex systems and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of conditions. In practice, however, this convergence is often observed to be disturbingly slow, frequently because of an inappropriate choice of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled Differential Evolution Adaptive Metropolis (DREAM), runs multiple chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multimodal search problems.
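The core idea of generating proposals from scaled differences of other chains in the population can be sketched in a few lines. The following is a minimal illustration of the simpler DE-MC scheme (ter Braak's differential-evolution Metropolis sampler) on which DREAM builds; it omits DREAM's randomized subspace sampling, crossover adaptation, and outlier-chain handling, and the target density and tuning constants here are illustrative choices, not the paper's:

```python
import numpy as np

def de_mc(log_post, dim, n_chains=10, n_iter=2000, seed=0):
    """Minimal DE-MC sampler: each chain proposes a jump built from the
    scaled difference of two other randomly chosen chains, so the
    population collectively adapts the scale and orientation of moves."""
    rng = np.random.default_rng(seed)
    gamma = 2.38 / np.sqrt(2 * dim)          # standard DE jump scale
    X = rng.normal(size=(n_chains, dim))     # initial population
    logp = np.array([log_post(x) for x in X])
    kept = []
    for t in range(n_iter):
        for i in range(n_chains):
            others = [j for j in range(n_chains) if j != i]
            a, b = rng.choice(others, size=2, replace=False)
            prop = X[i] + gamma * (X[a] - X[b]) \
                   + rng.normal(scale=1e-4, size=dim)   # small jitter
            lp = log_post(prop)
            if np.log(rng.random()) < lp - logp[i]:     # Metropolis accept
                X[i], logp[i] = prop, lp
        if t >= n_iter // 2:                            # discard burn-in
            kept.append(X.copy())
    return np.concatenate(kept)

# sample a standard 2-D Gaussian as a toy target
samples = de_mc(lambda x: -0.5 * np.dot(x, x), dim=2)
```

Because proposals are differences of population members, the jump distribution automatically matches the covariance structure of the posterior explored so far, which is the intuition behind the adaptive behavior described above.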
In this paper we introduce a new nonparametric test for Granger non-causality which avoids the over-rejection observed in the frequently used test proposed by Hiemstra and Jones [1994. Testing for linear and nonlinear Granger causality in the stock price-volume relation. Journal of Finance 49, 1639-1664]. After illustrating the problem by showing that rejection probabilities under the null hypothesis may tend to one as the sample size increases, we study the reason behind this phenomenon analytically. It turns out that the Hiemstra-Jones test for the null of Granger non-causality, which can be rephrased in terms of conditional independence of two vectors X and Z given a third vector Y, is sensitive to variations in the conditional distributions of X and Z that may be present under the null. To overcome this problem we replace the global test statistic by an average of local conditional dependence measures. By letting the bandwidth tend to zero at appropriate rates, the variations in the conditional distributions are accounted for automatically. Based on asymptotic theory we formulate practical guidelines for choosing the bandwidth depending on the sample size. We conclude with an application to historical returns and trading volumes of the Standard and Poor's index which indicates that the evidence for volume Granger-causing returns is weaker than suggested by the Hiemstra-Jones test.
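The "average of local conditional dependence measures" can be illustrated with a simplified version of such a statistic. The sketch below uses indicator (uniform) kernels and a fixed bandwidth `eps` for scalar X, Y, Z; these are assumptions for illustration only, and the actual test involves specific scaling, bandwidth rates, and an asymptotic variance estimate not reproduced here:

```python
import numpy as np

def local_dependence_statistic(x, y, z, eps=1.5):
    """Average of local conditional-dependence measures with indicator
    kernels: estimates E[ f_XYZ * f_Y - f_XY * f_YZ ], a quantity that
    is zero when X and Z are conditionally independent given Y."""
    n = len(x)
    # pairwise "closer than eps" indicators in each coordinate
    Ix = np.abs(x[:, None] - x[None, :]) < eps
    Iy = np.abs(y[:, None] - y[None, :]) < eps
    Iz = np.abs(z[:, None] - z[None, :]) < eps
    for ind in (Ix, Iy, Iz):
        np.fill_diagonal(ind, False)          # leave-one-out estimates
    f_xyz = (Ix & Iy & Iz).sum(axis=1) / (n - 1)
    f_y   = Iy.sum(axis=1) / (n - 1)
    f_xy  = (Ix & Iy).sum(axis=1) / (n - 1)
    f_yz  = (Iy & Iz).sum(axis=1) / (n - 1)
    return np.mean(f_xyz * f_y - f_xy * f_yz)
```

On conditionally independent data the statistic fluctuates around zero, while dependence of Z on X beyond what Y explains pushes it away from zero; averaging local density contrasts is what makes the approach robust to the variations in conditional distributions that trip up the global Hiemstra-Jones statistic.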
Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must therefore be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. In practice, however, because of errors in the model structure and the input (forcing) and output data, this has proven to be difficult, leading to considerable uncertainty in the model predictions. This paper surveys the limitations of current model calibration methodologies, which treat the uncertainty in the input-output relationship as being primarily attributable to uncertainty in the parameters, and presents a simultaneous optimization and data assimilation (SODA) method, which improves the treatment of uncertainty in hydrologic modeling. The usefulness and applicability of SODA is demonstrated by means of a pilot study using data from the Leaf River watershed in Mississippi and a simple hydrologic model with typical conceptual components.
The present study investigates the linear and nonlinear causal linkages between daily spot and futures prices for maturities of one, two, three and four months of West Texas Intermediate (WTI) crude oil. The data cover two periods, October 1991-October 1999 and November 1999-October 2007, with the latter being significantly more turbulent. Apart from the conventional linear Granger test, we apply a new nonparametric test for nonlinear causality by Diks and Panchenko after controlling for cointegration. In addition to the traditional pairwise analysis, we test for causality while correcting for the effects of the other variables. To check whether any of the observed causality is strictly nonlinear in nature, we also examine the nonlinear causal relationships of VECM-filtered residuals. Finally, we investigate the hypothesis of nonlinear non-causality after controlling for conditional heteroskedasticity in the data using a GARCH-BEKK model. Whilst the linear causal relationships disappear after VECM cointegration filtering, nonlinear causal linkages in some cases persist even after GARCH filtering in both periods. This indicates that spot and futures returns may exhibit asymmetries and statistically significant higher-order moments. Moreover, the results imply that if nonlinear effects are accounted for, neither market consistently leads nor lags the other; that is, the pattern of leads and lags changes over time.
We propose new scoring rules based on conditional and censored likelihood for assessing the predictive accuracy of competing density forecasts over a specific region of interest, such as the left tail in financial risk management. These scoring rules can be interpreted in terms of Kullback-Leibler divergence between weighted versions of the density forecast and the true density. Existing scoring rules based on weighted likelihood favor density forecasts with more probability mass in the given region, rendering predictive accuracy tests biased toward such densities. Using our novel likelihood-based scoring rules avoids this problem.
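The censored-likelihood idea can be sketched concretely for an indicator weight on the left tail: inside the region of interest the forecast is scored by its full log-density, while outside it only the total probability mass assigned to the complement matters. The Gaussian forecast family, threshold, and function names below are illustrative assumptions, and the conditional-likelihood variant mentioned above is not shown:

```python
import numpy as np
from math import erf, sqrt, pi

def norm_pdf(x, mu=0.0, sd=1.0):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))

def norm_cdf(x, mu=0.0, sd=1.0):
    return 0.5 * (1.0 + erf((x - mu) / (sd * sqrt(2))))

def censored_log_score(y, mu, sd, r):
    """Censored-likelihood score for a Gaussian density forecast N(mu, sd^2)
    with region of interest A = (-inf, r] (the left tail): log f(y) when
    y falls in A, log of the forecast mass outside A otherwise."""
    outside_mass = 1.0 - norm_cdf(r, mu, sd)
    return np.where(y <= r,
                    np.log(norm_pdf(y, mu, sd)),
                    np.log(outside_mass))
```

Because the score only "looks inside" the tail region while still accounting for total mass, a forecast cannot gain by artificially piling probability into the tail, which is precisely the bias of plain weighted-likelihood rules that this construction avoids.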