This paper develops new methodology, together with related theory, for combining information from independent studies through confidence distributions. A formal definition of a confidence distribution and of its asymptotic counterpart (the asymptotic confidence distribution) is given and illustrated in the context of combining information. Two general combination methods are developed: the first along the lines of combining p-values, with some notable differences regarding optimality in the sense of Bahadur efficiency; the second by multiplying and normalizing confidence densities. The latter approach is inspired by the common practice of multiplying likelihood functions to combine parametric information. The paper also develops adaptive combining methods, with supporting asymptotic theory of practical interest. The key point of the adaptive development is that the methods attempt to combine only the correct information, downweighting or excluding studies that contain little or misleading information about the true parameter of interest. The combination methodologies are illustrated on simulated and real data in a variety of applications.
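As a minimal illustration of the two combination ideas (assuming, purely for concreteness, that each study reports a normal CD N(x_i, s_i^2) for a common parameter; the function names below are ours, not the paper's):

```python
import numpy as np
from scipy import stats

def combine_pvalue_style(cd_values):
    """Stouffer-type recipe along the lines of combining p-values:
    if each H_i(u) is Uniform(0,1) at the true parameter, then so is
    the combined value, so the result is again a CD."""
    z = stats.norm.ppf(np.asarray(cd_values))
    return stats.norm.cdf(z.sum() / np.sqrt(len(cd_values)))

def combine_density_product(estimates, std_errs):
    """Multiply normal confidence densities N(x_i, s_i^2) and
    renormalize; a product of normal densities is again normal,
    with precision-weighted mean and precision equal to the sum
    of the individual precisions."""
    w = 1.0 / np.asarray(std_errs) ** 2
    mean_c = np.sum(w * np.asarray(estimates)) / w.sum()
    sd_c = np.sqrt(1.0 / w.sum())
    return lambda u: stats.norm.cdf(u, loc=mean_c, scale=sd_c)

# Three hypothetical studies of a common parameter
x, s = [1.2, 0.8, 1.0], [0.5, 0.4, 0.3]
H = [lambda u, xi=xi, si=si: stats.norm.cdf(u, loc=xi, scale=si)
     for xi, si in zip(x, s)]
u0 = 1.0
print(combine_pvalue_style([Hi(u0) for Hi in H]))  # combined CD at u0
print(combine_density_product(x, s)(u0))           # combined CD at u0
```

The two recipes generally give different answers, but both are valid CDs in this normal setting: each combined function, evaluated at the true parameter value, is again Uniform(0,1) over repeated samples.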
The notion of a confidence distribution (CD), an entirely frequentist concept, is in essence a Neymanian interpretation of Fisher's fiducial distribution, and it contains information relevant to every kind of frequentist inference. In this article, a CD is viewed as a distribution estimator of a parameter. This viewpoint leads naturally to consideration of the information contained in a CD, comparisons among CDs and optimal CDs, and the connection of the CD concept to the (profile) likelihood function. A formal development of multiparameter CDs is also presented.
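As a standard textbook illustration (not taken from this paper's own examples): for X_1,\ldots,X_n i.i.d. N(\mu,\sigma^2) with \sigma known, the function H_n(\mu)=\Phi(\sqrt{n}(\mu-\bar{x}_n)/\sigma) is a CD for \mu. For each fixed sample it is a distribution function on the parameter space, and at the true \mu it is Uniform(0,1) as a function of the data, so its quantile intervals are confidence intervals of the corresponding levels; in this sense H_n estimates \mu by a whole distribution rather than by a point or an interval.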
Let y=A\beta+\epsilon, where y is an N\times1 vector of observations, \beta
is a p\times1 vector of unknown regression coefficients, A is an N\times p
design matrix and \epsilon is a spherically symmetric error term with unknown
scale parameter \sigma. We consider estimation of \beta under general quadratic
loss functions, and, in particular, extend the work of Strawderman [J. Amer.
Statist. Assoc. 73 (1978) 623-627] and Casella [Ann. Statist. 8 (1980)
1036-1056, J. Amer. Statist. Assoc. 80 (1985) 753-758] by finding adaptive
minimax estimators (which are, under the normality assumption, also generalized
Bayes) of \beta, which have greater numerical stability (i.e., smaller
condition number) than the usual least squares estimator. In particular, we
give a subclass of such estimators which, surprisingly, has a very simple form.
We also show that under certain conditions the generalized Bayes minimax
estimators in the normal case are also generalized Bayes and minimax in the
general case of spherically symmetric errors.

Published at http://dx.doi.org/10.1214/009053605000000327 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
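The numerical-stability point can be illustrated by a rough sketch (an admittedly loose stand-in: a fixed ridge-type shrinkage map replaces the paper's adaptive minimax estimators, whose exact form is not given in this abstract), comparing the condition numbers of the linear maps y -> \hat{\beta}:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 50, 5
A = rng.normal(size=(N, p))
A[:, 4] = A[:, 3] + 0.01 * rng.normal(size=N)  # two nearly collinear columns

# Least squares map: beta_hat = (A'A)^{-1} A' y
ols_map = np.linalg.solve(A.T @ A, A.T)

# Ridge-type shrinkage map: (A'A + lam*I)^{-1} A' y
# (an illustrative stand-in for the adaptive minimax estimators)
lam = 1.0
ridge_map = np.linalg.solve(A.T @ A + lam * np.eye(p), A.T)

print(np.linalg.cond(ols_map))    # large when A is ill-conditioned
print(np.linalg.cond(ridge_map))  # smaller: more stable as a map of y
```

The singular values of the least squares map are 1/\sigma_i(A), so its condition number equals that of A; shrinkage replaces them by \sigma_i/(\sigma_i^2+\lambda), which compresses their ratio and hence reduces the condition number.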