We analyze the empirical power and specification of test statistics in event studies designed to detect long-run (one- to five-year) abnormal stock returns. We document that test statistics based on abnormal returns calculated using a reference portfolio, such as a market index, are misspecified (empirical rejection rates exceed theoretical rejection rates) and identify three reasons for this misspecification. We correct for the three identified sources of misspecification by matching sample firms to control firms of similar sizes and book-to-market ratios. This control firm approach yields well-specified test statistics in virtually all sampling situations considered.
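As a concrete illustration of the control firm approach described above, the sketch below compounds monthly returns into buy-and-hold returns and tests whether the mean abnormal return of sample firms relative to their size and book-to-market matched controls is zero. The array names (`sample_returns`, `control_returns`) and the use of a conventional one-sample t-test are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch of a control-firm buy-and-hold abnormal return test.
# `sample_returns` and `control_returns` are assumed NumPy arrays of shape
# (n_firms, n_months) holding monthly simple returns for each sample firm
# and its size/book-to-market matched control firm.
import numpy as np
from scipy import stats

def buy_and_hold_return(monthly_returns):
    """Compound monthly simple returns into a buy-and-hold return."""
    return np.prod(1.0 + monthly_returns, axis=-1) - 1.0

def control_firm_test(sample_returns, control_returns):
    """Mean buy-and-hold abnormal return and a conventional t-statistic."""
    bhar = buy_and_hold_return(sample_returns) - buy_and_hold_return(control_returns)
    t_stat, p_value = stats.ttest_1samp(bhar, popmean=0.0)
    return bhar.mean(), t_stat, p_value
```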
We analyze tests for long-run abnormal returns and document that two approaches yield well-specified test statistics in random samples. The first uses a traditional event study framework and buy-and-hold abnormal returns calculated using carefully constructed reference portfolios. Inference is based on either a skewness-adjusted t-statistic or the empirically generated distribution of long-run abnormal returns. The second approach is based on calculation of mean monthly abnormal returns using calendar-time portfolios and a time-series t-statistic. Though both approaches perform well in random samples, misspecification in nonrandom samples is pervasive. Thus, analysis of long-run abnormal returns is treacherous.

Commonly used methods to test for long-run abnormal stock returns yield misspecified test statistics, as documented by Barber and Lyon (1997a) and Kothari and Warner (1997). Simulations reveal that empirical rejection levels routinely exceed theoretical rejection levels in these tests. In combination, these papers highlight three causes for this misspecification. First, the new listing or survivor bias arises because, in event studies of long-run abnormal returns, sampled firms are tracked for a long post-event period, but the firms that constitute the index (or reference portfolio) typically include firms that begin trading subsequent to the event month. Second, the rebalancing bias arises because the compound returns of a reference portfolio, such as an equally weighted market index, are typically calculated assuming periodic (generally monthly) rebalancing, whereas the returns of sample firms are compounded without rebalancing. Third, the skewness bias arises because the distribution of long-run abnormal stock returns is positively skewed, which also contributes to the misspecification of test statistics. Generally, the new listing bias creates a positive bias in test statistics, and the rebalancing and skewness biases create a negative bias.

In this research, we evaluate two general approaches for tests of long-run abnormal stock returns that control for these three sources of bias. The first approach is based on a traditional event study framework and buy-and-hold abnormal returns. In this approach we first carefully construct reference portfolios that are free of the new listing and rebalancing biases. Consequently, these reference portfolios yield a population mean abnormal return measure that is identically zero and, therefore, reduce the misspecification of test statistics. We then control for the skewness bias in tests of long-run abnormal returns by applying standard statistical methods recommended for settings in which the underlying distribution is positively skewed. Two statistical methods virtually eliminate the skewness bias in random samples: (1) a bootstrapped version of a skewness-adjusted t-statistic, and (2) empirical p-values calculated from the simulated distribution of mean long-run abnormal returns estimated from pseudoportfolios. The first method is developed and analyzed based on a rich history of research in st...
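A minimal sketch of the skewness correction in the first approach, assuming an array `bhar` of buy-and-hold abnormal returns: it computes a skewness-adjusted t-statistic (often attributed to Johnson, 1978) and a bootstrapped two-sided p-value from resamples of roughly one-quarter of the sample. Centering the resamples to impose the null and the exact resample fraction are illustrative choices, not necessarily the published procedure.

```python
# Illustrative sketch of a bootstrapped skewness-adjusted t-statistic.
import numpy as np

def skewness_adjusted_t(bhar):
    """Skewness-adjusted t-statistic for the mean abnormal return."""
    n = len(bhar)
    mean = bhar.mean()
    sigma = bhar.std(ddof=1)
    s = mean / sigma
    gamma = np.sum((bhar - mean) ** 3) / (n * sigma ** 3)  # sample skewness
    return np.sqrt(n) * (s + gamma * s ** 2 / 3.0 + gamma / (6.0 * n))

def bootstrapped_p_value(bhar, n_resamples=1000, resample_frac=0.25, seed=0):
    """Two-sided p-value from the bootstrapped distribution of the statistic."""
    rng = np.random.default_rng(seed)
    t_obs = skewness_adjusted_t(bhar)
    b = max(2, int(resample_frac * len(bhar)))
    centered = bhar - bhar.mean()  # impose the null of a zero mean abnormal return
    t_boot = np.array([
        skewness_adjusted_t(rng.choice(centered, size=b, replace=True))
        for _ in range(n_resamples)
    ])
    return np.mean(np.abs(t_boot) >= abs(t_obs))
```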
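The second approach (calendar-time portfolios) can be sketched as follows: each calendar month, form an equally weighted portfolio of all firms whose event falls within the preceding horizon, compute that month's abnormal return against a benchmark, and base inference on a time-series t-statistic of the mean monthly abnormal return. The data layout, the 36-month horizon, and the benchmark choice below are illustrative assumptions.

```python
# Hypothetical sketch of a calendar-time portfolio test.
import numpy as np

def calendar_time_t(firm_returns, benchmark_returns, event_month, horizon=36):
    """Mean monthly calendar-time abnormal return and its time-series t-statistic.

    firm_returns:      dict {firm: array of monthly returns indexed by calendar month}
    benchmark_returns: array of benchmark (expected) monthly returns, same index
    event_month:       dict {firm: calendar-month index of the firm's event}
    """
    n_months = len(benchmark_returns)
    monthly_ar = []
    for t in range(n_months):
        # Equally weighted portfolio of firms whose event occurred within
        # the preceding `horizon` calendar months.
        members = [f for f, m in event_month.items() if m <= t < m + horizon]
        if not members:
            continue
        portfolio_return = np.mean([firm_returns[f][t] for f in members])
        monthly_ar.append(portfolio_return - benchmark_returns[t])
    ar = np.asarray(monthly_ar)
    t_stat = ar.mean() / (ar.std(ddof=1) / np.sqrt(len(ar)))
    return ar.mean(), t_stat
```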