Regulatory impact analyses (RIAs) weigh the benefits of regulations against the burdens they impose and are invaluable tools for informing decision makers. We offer 10 tips for nonspecialist policymakers and interested stakeholders who will be reading RIAs as consumers.

1. Core problem: Determine whether the RIA identifies the core problem (compelling public need) the regulation is intended to address.
2. Alternatives: Look for an objective, policy-neutral evaluation of the relative merits of reasonable alternatives.
3. Baseline: Check whether the RIA presents a reasonable “counterfactual” against which benefits and costs are measured.
4. Increments: Evaluate whether totals and averages obscure relevant distinctions and trade-offs.
5. Uncertainty: Recognize that all estimates involve uncertainty, and ask what effect key assumptions, data, and models have on those estimates.
6. Transparency: Look for transparency and objectivity of analytical inputs.
7. Benefits: Examine how projected benefits relate to stated objectives.
8. Costs: Understand what costs are included.
9. Distribution: Consider how benefits and costs are distributed.
10. Symmetrical treatment: Ensure that benefits and costs are presented symmetrically.
The emergence of behavioral public administration has led to increasing calls for public managers and policy makers to consider predictable cognitive biases when regulating individual behaviors or market transactions. Recognizing that cognitive biases can also affect the regulators themselves, this article attempts to understand how the institutional environment in which regulators operate interacts with their cognitive biases. In other words, to what extent does the "choice architecture" that regulators face reinforce or counteract predictable cognitive biases? Just as knowledge of behavioral insights can help regulators design a choice architecture that frames individual decisions to encourage welfare-enhancing choices, it may help governments understand and design institutions to counter cognitive biases in regulators that contribute to deviations from public interest policies. From these observations, the article offers some modest suggestions for improving the regulatory choice architecture.
Federal and other regulatory agencies often use or claim to use a weight of evidence (WoE) approach in chemical evaluation. Their approaches to the use of WoE, however, differ significantly, rely heavily on subjective professional judgment, and merit improvement. We review uses of WoE approaches in key articles in the peer-reviewed scientific literature, and find significant variations. We find that a hypothesis-based WoE approach, developed by Lorenz Rhomberg et al., can provide a stronger scientific basis for chemical assessment while improving transparency and preserving the appropriate scope of professional judgment. Their approach, while still evolving, relies on the explicit specification of the hypothesized basis for using the information at hand to infer the ability of an agent to cause human health impacts or, more broadly, affect other endpoints of concern. We describe and endorse such a hypothesis-based WoE approach to chemical evaluation.
After a decades-long decline in the availability of abortion training, opportunities for abortion training have increased. However, there is reason to be cautious in interpreting these results, including possible response bias and pressure to report the availability of abortion training because of new guidelines from the Accreditation Council for Graduate Medical Education.