Honest communication of uncertainty about quantities of interest enhances transparency in scientific assessments. To support this communication, risk assessors should choose appropriate ways to evaluate and characterize epistemic uncertainty. A full treatment of uncertainty requires methods that distinguish aleatory from epistemic uncertainty. Quantitative expressions of epistemic uncertainty are advantageous in scientific assessments because they are unambiguous and enable individual uncertainties to be characterized and combined in a systematic way. Since 2019, the European Food Safety Authority (EFSA) has recommended that assessors express epistemic uncertainty in the conclusions of scientific assessments quantitatively, using subjective probability. A subjective probability can be used to represent an expert judgment, which may or may not be updated using Bayes's rule to integrate evidence available for the assessment, and can be either precise or approximate. Approximate (or bounded) probabilities may be sufficient for decision making and allow experts to reach agreement on certainty when they struggle to specify precise subjective probabilities. The difference between the lower and upper bound on a subjective probability can also be used to reflect an expert's strength of knowledge. In this article, we demonstrate how to quantify uncertainty by bounded probability, and to explicitly distinguish between epistemic and aleatory uncertainty, by means of robust Bayesian analysis, which includes standard Bayesian analysis through precise probability as a special case. For illustration, the two analyses are applied to an intake assessment.
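As a minimal sketch of the idea (not the intake assessment from the article), robust Bayesian analysis can be illustrated by updating a *set* of priors rather than a single one: each prior yields a posterior probability, and the minimum and maximum over the set give a bounded (approximate) probability. The Beta-Binomial model, the data, the threshold, and the prior set below are all hypothetical, chosen only for illustration.

```python
def posterior_prob_exceeds(a, b, x, n, thresh, grid=2000):
    # Grid-based Bayesian update: Beta(a, b) prior, Binomial(n, theta)
    # likelihood with x "successes"; returns P(theta > thresh | data).
    thetas = [(i + 0.5) / grid for i in range(grid)]
    w = [t ** (a - 1 + x) * (1 - t) ** (b - 1 + n - x) for t in thetas]
    z = sum(w)
    return sum(wi for t, wi in zip(thetas, w) if t > thresh) / z

# Hypothetical data: 3 exceedances of a safe intake level in 20 observations.
x, n = 3, 20

# A set of Beta priors representing imprecise prior knowledge about theta.
priors = [(a, b) for a in (0.5, 1.0, 2.0) for b in (0.5, 1.0, 2.0)]

# Robust Bayesian analysis: update every prior in the set, then bound the
# posterior probability of the event of interest from below and above.
probs = [posterior_prob_exceeds(a, b, x, n, 0.25) for a, b in priors]
lower, upper = min(probs), max(probs)
print(f"P(theta > 0.25) lies in [{lower:.3f}, {upper:.3f}]")
```

A standard (precise) Bayesian analysis is recovered as the special case where the prior set contains a single distribution, in which case the lower and upper bounds coincide.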
Meta‐analysis is a statistical method used in evidence synthesis to combine, analyze, and summarize studies that share the same target endpoint, with the aim of deriving a pooled quantitative estimate using fixed‐effects, random‐effects, or network models. Differences among included studies arise from variations in target populations (ie, heterogeneity) and variations in study quality due to study design and execution (ie, bias). The risk of bias is usually assessed qualitatively using critical appraisal, and quantitative bias analysis can be used to evaluate the influence of bias on the quantity of interest. We propose a way to account for ignorance or ambiguity in how to quantify bias terms in a bias analysis by characterizing bias with imprecision (as bounds on probability) and using robust Bayesian analysis to estimate the overall effect. Robust Bayesian analysis is here seen as Bayesian updating performed over a set of coherent probability distributions, where the set emerges from a set of bias terms. We show how the set of bias terms can be specified based on judgments of the relative magnitude of biases (ie, low, unclear, and high risk of bias) in one or several domains of the Cochrane risk‐of‐bias table. For illustration, we apply a robust Bayesian bias‐adjusted random effects model to an already published meta‐analysis on the effect of Rituximab for rheumatoid arthritis from the Cochrane Database of Systematic Reviews.
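The core mechanic can be sketched in simplified form: assign each study a set of additive bias terms whose width reflects its risk-of-bias rating, then pool the bias-adjusted estimates under every combination of terms and report the range. This toy example uses inverse-variance fixed-effect pooling rather than the article's Bayesian random effects model, and all effect sizes, variances, and bias sets below are hypothetical.

```python
from itertools import product

# Hypothetical studies: (log odds ratio, variance, risk-of-bias rating).
studies = [(-0.80, 0.04, "low"),
           (-0.50, 0.09, "high"),
           (-0.65, 0.06, "unclear")]

# Assumed sets of additive bias terms (log-OR scale) by rating: higher risk
# of bias maps to a wider set, expressing greater ambiguity about the bias.
bias_sets = {"low": [0.0],
             "unclear": [-0.2, 0.0, 0.2],
             "high": [-0.4, 0.0, 0.4]}

def pooled(adjusted):
    # Inverse-variance weighted (fixed-effect) pooled estimate.
    w = [1.0 / v for _, v in adjusted]
    return sum(wi * y for wi, (y, _) in zip(w, adjusted)) / sum(w)

# Pool under every combination of bias terms, one term per study, and
# report the lower and upper bound on the bias-adjusted overall effect.
estimates = []
for deltas in product(*(bias_sets[r] for _, _, r in studies)):
    adjusted = [(y - d, v) for (y, v, _), d in zip(studies, deltas)]
    estimates.append(pooled(adjusted))
print(f"bias-adjusted pooled log-OR in "
      f"[{min(estimates):.3f}, {max(estimates):.3f}]")
```

In the robust Bayesian setting, each combination of bias terms would instead index a posterior distribution, and bounds would be taken over the resulting set of posteriors; the enumeration-over-sets structure is the same.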