Significance

Choosing a statistical model and accounting for uncertainty about this choice are important parts of the scientific process and are required for common statistical tasks such as parameter estimation, interval estimation, statistical inference, point prediction, and interval prediction. A canonical example is the choice of variables in a linear regression model. Many ways of doing this have been proposed, including Bayesian and penalized regression methods, and it is not clear which are best. We compare 21 popular methods via an extensive simulation study based on a wide range of real datasets. We found that three adaptive Bayesian model averaging methods performed best across all the statistical tasks and that two of these were also among the most computationally efficient.
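The core of Bayesian model averaging is weighting each candidate model by its (approximate) posterior probability; a common approximation uses the BIC, with posterior model probabilities proportional to exp(-BIC/2). A minimal sketch on synthetic data (illustrative only; not the study's 21 methods):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends on x1 only; x2 is pure noise.
n = 200
X = rng.normal(size=(n, 2))
y = 2.0 * X[:, 0] + rng.normal(size=n)

def fit_ols(X, y):
    """OLS fit; returns coefficients and BIC under a Gaussian likelihood."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    sigma2 = resid @ resid / len(y)
    k = Xd.shape[1] + 1  # regression coefficients plus the error variance
    bic = len(y) * np.log(sigma2) + k * np.log(len(y))
    return beta, bic

# Candidate models: {x1}, {x2}, {x1, x2}
models = [[0], [1], [0, 1]]
bics = np.array([fit_ols(X[:, m], y)[1] for m in models])

# Approximate posterior model probabilities: exp(-BIC/2), normalized
# (shift by the minimum BIC for numerical stability).
w = np.exp(-(bics - bics.min()) / 2)
w /= w.sum()
print(dict(zip(["x1", "x2", "x1+x2"], np.round(w, 3))))
```

A BMA point prediction is then the weighted average of the per-model predictions with these weights; here essentially all weight falls on the models containing the true predictor x1.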
Studies find that older adults want control over how technologies are used in their care, but how that control can be operationalized through design remains to be clarified. We present findings from a large survey (n=825) of a well-characterized U.S. online cohort that provides actionable evidence of the importance of designing for control over monitoring technologies. This uniquely large, age-diverse sample allows us to compare needs across age and other characteristics, with insights about both future users and current older adults (n=496, age >64), including those concerned about their own memory loss (n=201). All five control options, none of which is currently enabled, were very or extremely important to most people across ages. Findings indicate that comfort with a range of care technologies is contingent on having privacy- and other control-enabling options. We discuss opportunities for design to meet these user needs, which demand course correction through attentive, creative work.
With the growing commonality of multi-omics datasets, there is now increasing evidence that integrated omics profiles lead to more efficient discovery of clinically actionable biomarkers that enable better disease outcome prediction and patient stratification. Several methods exist to perform host phenotype prediction from cross-sectional, single-omics data modalities, but decentralized frameworks that jointly analyze multiple time-dependent omics datasets to highlight the integrative and dynamic impact of repeatedly measured biomarkers remain limited. In this article, we propose a novel Bayesian ensemble method to consolidate prediction by borrowing information across several longitudinal and cross-sectional omics data layers. Unlike existing frequentist paradigms, our approach enables uncertainty quantification in prediction, as well as interval estimation for a variety of quantities of interest, based on posterior summaries. We apply our method to four published multi-omics datasets and demonstrate that it recapitulates known biology, provides novel insights, and outperforms existing methods in estimation, prediction, and uncertainty quantification. Our open-source software is publicly available at https://github.com/himelmallick/IntegratedLearner.
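To illustrate the ensemble idea only (not the paper's Bayesian implementation, which quantifies uncertainty via posterior summaries), a bare-bones stacking analogue fits one base learner per omics layer and then combines the layer-wise predictions with non-negative weights. The layer names and data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical omics "layers" measured on the same 100 subjects.
n = 100
layers = {
    "metabolomics": rng.normal(size=(n, 5)),
    "metagenomics": rng.normal(size=(n, 5)),
}
# Outcome driven by the first feature of each layer.
y = (layers["metabolomics"][:, 0]
     + 0.5 * layers["metagenomics"][:, 0]
     + 0.3 * rng.normal(size=n))

def ridge_fit(X, y, lam=1.0):
    """Ridge regression coefficients (no intercept, for brevity)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Step 1: a base learner per layer, producing layer-wise predictions.
preds = np.column_stack([X @ ridge_fit(X, y) for X in layers.values()])

# Step 2: combine layer predictions with weights constrained to be
# non-negative and sum to one (a crude stand-in for stacking weights,
# which should be estimated from out-of-sample predictions).
w, *_ = np.linalg.lstsq(preds, y, rcond=None)
w = np.clip(w, 0, None)
w /= w.sum()
ensemble = preds @ w
print(np.round(w, 2))
```

In a proper stacked ensemble the layer predictions in step 1 would be cross-validated, and in the Bayesian version the weights carry posterior distributions rather than point values.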
Bayesian model averaging (BMA) provides a coherent way to account for model uncertainty in statistical inference tasks. BMA requires specification of model space priors and parameter space priors. In this article we focus on comparing different model space priors in the presence of model uncertainty. We consider eight reference model space priors used in the literature and three adaptive parameter priors recommended by Porwal and Raftery [37]. We assess the performance of these combinations of prior specifications for variable selection in linear regression models on the statistical tasks of parameter estimation, interval estimation, inference, and point and interval prediction. We carry out an extensive simulation study based on 14 real datasets representing a range of situations encountered in practice. We found that beta-binomial model space priors specified in terms of the prior probability of model size performed best on average across the various statistical tasks and datasets, outperforming priors that were uniform across models. Recently proposed complexity priors performed relatively poorly.
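Under a beta-binomial model space prior, each variable-inclusion indicator is Bernoulli(w) with w ~ Beta(a, b); integrating out w gives the prior mass of any single model of size k, namely B(a+k, b+p-k)/B(a, b). With a = b = 1 the induced prior on model size is uniform over 0..p, unlike a prior that is uniform over all 2^p models, which concentrates mass on middling model sizes. An illustrative sketch using only the standard library:

```python
from math import comb, exp, lgamma

def betaln(a, b):
    """log Beta function via log-gamma."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binomial_model_prior(k, p, a=1.0, b=1.0):
    """Prior probability of one particular model including k of p
    variables, with inclusion indicators Bernoulli(w), w ~ Beta(a, b),
    and w integrated out."""
    return exp(betaln(a + k, b + p - k) - betaln(a, b))

p = 10
# Prior mass on each model SIZE: there are choose(p, k) models of size k.
size_mass = [comb(p, k) * beta_binomial_model_prior(k, p)
             for k in range(p + 1)]
print([round(m, 4) for m in size_mass])
```

With a = b = 1 every model size 0..p receives mass 1/(p+1), so very sparse and very dense models are not penalized simply for being rare in the model space.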
Power-expected-posterior (PEP) methodology, which borrows ideas from the literature on power priors, expected-posterior priors, and unit information priors, provides a systematic way to construct objective priors. The basic idea is to use imaginary training samples to update a noninformative prior into a minimally informative prior. In this work, we develop a novel definition of PEP priors for generalized linear models that relies on a Laplace expansion of the likelihood of the imaginary training sample. This approach has various computational, practical, and theoretical advantages over previous proposals for non-informative priors for generalized linear models. We place special emphasis on logistic regression models, where sample separation presents particular challenges to alternative methodologies. We investigate both asymptotic and finite-sample properties of the procedures, showing that the prior is both asymptotically and intrinsically consistent, and that its performance is at least competitive with, and in some settings superior to, that of alternative approaches in the literature.
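For orientation, the baseline PEP construction (in the standard notation of the expected-posterior-prior literature; the paper's Laplace-based variant for GLMs modifies the likelihood term) averages a power-posterior over imaginary data $y^{*}$:

```latex
\pi^{\mathrm{PEP}}(\theta \mid \delta)
  = \int \pi^{N}(\theta \mid y^{*}; \delta)\, m^{N}(y^{*} \mid \delta)\, dy^{*},
\qquad
\pi^{N}(\theta \mid y^{*}; \delta)
  \propto f(y^{*} \mid \theta)^{1/\delta}\, \pi^{N}(\theta),
```

where $\pi^{N}$ is the noninformative baseline prior, $m^{N}$ is the corresponding prior predictive for the imaginary sample, and setting the power $\delta$ equal to the imaginary sample size $n^{*}$ downweights the imaginary data to roughly unit information.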