This paper studies the quality of portfolio performance tests based on out-of-sample returns. By disentangling the components of out-of-sample performance, we show that observed differences are driven to a large extent by differences in estimation risk. Our Monte Carlo study reveals that the puzzling empirical finding of inferior performance of theoretically superior strategies mainly results from the low power of these tests. Our results thus explain why, in many out-of-sample horse races, the null hypothesis of equal performance of the simple equally weighted portfolio and many alternative, theoretically superior strategies cannot be rejected. Our findings are robust with respect to different designs and implementation strategies of the tests. For the applied researcher, we provide some guidance on coping with the problem of low power. In particular, we show, by means of a novel pretest-based portfolio strategy, how the information from performance tests can be used optimally.
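To make the power problem concrete, the toy Monte Carlo below simulates how rarely a simple paired t-test on out-of-sample return differences rejects equal performance of the 1/N rule against a plug-in minimum-variance portfolio. This is our own illustrative design, not the paper's: the distributional parameters, sample sizes, and the choice of a t-test as a stand-in for the performance tests studied in the paper are all assumptions of the sketch.

```python
import numpy as np
from scipy import stats

def rejection_rate(n_assets=10, t_est=120, t_oos=120, n_mc=1000, seed=0):
    """Toy power experiment: how often does a paired t-test on
    out-of-sample return differences reject equal performance of 1/N
    versus a plug-in minimum-variance portfolio? (Illustrative only.)"""
    rng = np.random.default_rng(seed)
    mu = np.full(n_assets, 0.008)
    # Heterogeneous variances so the true minimum-variance weights
    # genuinely differ from the equally weighted 1/N benchmark.
    sig = np.diag(np.linspace(0.02, 0.08, n_assets)) ** 2
    rejections = 0
    for _ in range(n_mc):
        r = rng.multivariate_normal(mu, sig, size=t_est + t_oos)
        est, oos = r[:t_est], r[t_est:]
        # Plug-in minimum-variance weights carry estimation risk.
        cov_hat = np.cov(est, rowvar=False)
        w_mv = np.linalg.solve(cov_hat, np.ones(n_assets))
        w_mv /= w_mv.sum()
        # Per-period return difference: min-variance minus 1/N.
        diff = oos @ w_mv - oos.mean(axis=1)
        _, p = stats.ttest_1samp(diff, 0.0)
        rejections += p < 0.05
    return rejections / n_mc

print(rejection_rate())  # rejection frequencies far below 1 signal low power
```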
This paper exploits the idea of combining pretesting and bagging to choose between competing portfolio strategies. We propose an estimator for the portfolio weight vector that optimally trades off Type I against Type II errors when choosing the best investment strategy. Furthermore, we accommodate the idea of bagging in the portfolio testing problem, which helps to avoid sharp thresholding and reduces turnover costs substantially. Our Bagged Pretested Portfolio Selection (BPPS) approach borrows from both the shrinkage and the forecast combination literature. The portfolio weights of our strategy are weighted averages of the portfolio weights from a set of stand-alone strategies. More specifically, the weights are generated from pseudo-out-of-sample portfolio pretesting, such that they reflect the probability that a given strategy will be the best performing overall. The resulting strategy allows for a flexible and smooth switch between the underlying strategies and outperforms the corresponding stand-alone strategies. Besides yielding high point estimates of the portfolio performance measures, the BPPS approach performs exceptionally well in terms of precision and is robust against outliers resulting from the choice of the asset space.
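The combination idea can be sketched as follows, assuming the pretest probabilities are approximated by bootstrap frequencies of pseudo-out-of-sample Sharpe-ratio wins. The Sharpe-ratio criterion, the two stand-alone strategies, and all function names below are our illustrative choices, not the paper's exact estimator.

```python
import numpy as np

def bagged_pretest_weights(returns, strategies, n_boot=500, seed=0):
    """Sketch of a bagged pretest combination: each candidate strategy's
    combination weight is the bootstrap frequency with which it delivers
    the best pseudo-out-of-sample Sharpe ratio."""
    rng = np.random.default_rng(seed)
    T = returns.shape[0]
    # Portfolio weight vector proposed by each stand-alone strategy.
    W = np.array([s(returns) for s in strategies])   # shape (K, N)
    # Pseudo-out-of-sample return path of each strategy.
    paths = returns @ W.T                            # shape (T, K)
    wins = np.zeros(len(strategies))
    for _ in range(n_boot):
        idx = rng.integers(0, T, size=T)             # bootstrap resample
        boot = paths[idx]
        sharpe = boot.mean(axis=0) / boot.std(axis=0)
        wins[np.argmax(sharpe)] += 1
    probs = wins / n_boot                            # P(strategy is best)
    return probs @ W                                 # smooth, not sharp, switch

# Two illustrative stand-alone strategies (our placeholders).
def equally_weighted(returns):
    return np.full(returns.shape[1], 1.0 / returns.shape[1])

def minimum_variance(returns):
    cov = np.cov(returns, rowvar=False)
    w = np.linalg.solve(cov, np.ones(returns.shape[1]))
    return w / w.sum()

rng = np.random.default_rng(1)
r = rng.normal(0.0005, 0.01, size=(250, 5))          # toy daily returns
print(bagged_pretest_weights(r, [equally_weighted, minimum_variance]).round(3))
```

Because the combination weights are bootstrap frequencies rather than a 0/1 pretest decision, the resulting portfolio moves smoothly between the stand-alone strategies, which is what keeps turnover low relative to hard thresholding.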
Studying the relative weighting of different cues for the interpretation of a linguistic phenomenon is a core element of psycholinguistic research. This research needs to strike a balance between two goals: generalisability to diverse lexical settings, which requires a high number of different lexicalisations, and the investigation of a large number of different cues, which requires a high number of different test conditions. Optimising both is impossible with classical psycholinguistic designs, as this would leave the participants with too many experimental trials. Previously, we showed that Active Learning (AL) systems make it possible to test numerous conditions (eight) and items (32) within the same experiment. As stimulus selection was informed by the system's learning mechanism, AL sped up the labelling process. In the present study, we extend the use case to an experiment with 16 conditions, manipulated through four binary factors (the experimental setting and three prosodic cues; two levels each). Our findings show that the AL system correctly predicted the intended result pattern after only twelve trials. Hence, AL further confirmed previous findings and proved to be an efficient tool, offering a promising solution to complex study designs in psycholinguistic research.
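A generic uncertainty-sampling loop conveys the mechanism of AL-driven stimulus selection over a 16-condition design (four binary factors). The simulated participant, the cue weights, and the logistic model below are placeholders of our own, not the study's actual AL system.

```python
import numpy as np
from itertools import product
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
conditions = np.array(list(product([0, 1], repeat=4)))  # 16 conditions

def participant(x):
    """Simulated listener; the cue weights here are arbitrary placeholders."""
    logit = 2.0 * x[1] + 1.5 * x[2] - 1.8                # two cues drive responses
    return int(rng.random() < 1.0 / (1.0 + np.exp(-logit)))

X, y = [], []
# Warm-up: random trials until both response categories are observed.
while len(set(y)) < 2:
    c = conditions[rng.integers(len(conditions))]
    X.append(c); y.append(participant(c))

model = LogisticRegression()
for trial in range(12):                                  # adaptive trials
    model.fit(np.array(X), np.array(y))
    p = model.predict_proba(conditions)[:, 1]
    nxt = conditions[np.argmin(np.abs(p - 0.5))]         # most uncertain condition
    X.append(nxt); y.append(participant(nxt))

model.fit(np.array(X), np.array(y))
print(dict(zip(["setting", "cue1", "cue2", "cue3"], model.coef_[0].round(2))))
```

Selecting each next stimulus where the model is least certain is what lets such a system home in on the informative conditions after far fewer trials than a full factorial presentation would require.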