Decision-making in health care is inevitably undertaken in a context of uncertainty concerning the effectiveness and costs of health care interventions and programmes. One method that has been suggested to represent this uncertainty is the cost-effectiveness acceptability curve. This technique, which directly addresses the decision-making problem, has advantages over confidence interval estimation for incremental cost-effectiveness ratios. However, despite these advantages, cost-effectiveness acceptability curves have yet to be widely adopted within the field of economic evaluation of health care technologies. In this paper we consider the relationship between cost-effectiveness acceptability curves and decision-making in health care, suggest the introduction of a new concept more relevant to decision-making, that of the cost-effectiveness frontier, and clarify the use of these techniques when considering decisions involving multiple interventions. We hope that as a result we can encourage the greater use of these techniques.
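To illustrate how such a curve is constructed (this sketch is not taken from the paper), the Python snippet below uses purely hypothetical simulated incremental costs and effects and, for each candidate willingness-to-pay threshold, records the proportion of simulations in which the new intervention has a positive incremental net monetary benefit; plotting that proportion against the threshold traces the cost-effectiveness acceptability curve.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 10_000

# Hypothetical simulated incremental costs (GBP) and effects (QALYs) of a
# new intervention versus its comparator, e.g. draws from a probabilistic
# decision model or a bootstrapped trial dataset.
delta_cost = rng.normal(loc=1_500.0, scale=600.0, size=n_sim)
delta_qaly = rng.normal(loc=0.10, scale=0.05, size=n_sim)

# Willingness-to-pay thresholds (GBP per QALY) at which to evaluate the curve.
thresholds = range(0, 50_001, 2_500)

# CEAC: at each threshold k, the probability that the incremental net
# monetary benefit k * delta_QALY - delta_cost is positive.
ceac = [float(np.mean(k * delta_qaly - delta_cost > 0)) for k in thresholds]

for k, p in zip(thresholds, ceac):
    print(f"£{k:>6,}/QALY  P(cost-effective) = {p:.2f}")
```

In this form the curve answers the decision maker's question directly, namely the probability that the intervention is cost-effective at a given threshold, rather than bounding the incremental cost-effectiveness ratio with a confidence interval.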
Background: Cost-effectiveness analysis involves the comparison of the incremental cost-effectiveness ratio of a new technology, which is more costly than existing alternatives, with the cost-effectiveness threshold. This indicates whether or not the health expected to be gained from its use exceeds the health expected to be lost elsewhere as other health-care activities are displaced. The threshold therefore represents the additional cost that has to be imposed on the system to forgo 1 quality-adjusted life-year (QALY) of health through displacement. There are no empirical estimates of the cost-effectiveness threshold used by the National Institute for Health and Care Excellence.
Objectives: (1) To provide a conceptual framework to define the cost-effectiveness threshold and to provide the basis for its empirical estimation. (2) Using programme budgeting data for the English NHS, to estimate the relationship between changes in overall NHS expenditure and changes in mortality. (3) To extend this mortality measure of the health effects of a change in expenditure to life-years and to QALYs by estimating the quality of life (QoL) associated with effects on years of life and the additional direct impact on QoL itself. (4) To present the best estimate of the cost-effectiveness threshold for policy purposes.
Methods: Earlier econometric analysis estimated the relationship between differences in primary care trust (PCT) spending, across programme budget categories (PBCs), and associated disease-specific mortality. This research is extended in several ways, including estimating the impact of marginal increases or decreases in overall NHS expenditure on spending in each of the 23 PBCs. Further stages of work link the econometrics to broader health effects in terms of QALYs.
Results: The most relevant 'central' threshold is estimated to be £12,936 per QALY (2008 expenditure, 2008–10 mortality). Uncertainty analysis indicates that the probability that the threshold is < £20,000 per QALY is 0.89 and the probability that it is < £30,000 per QALY is 0.97. Additional 'structural' uncertainty suggests, on balance, that the central or best estimate is, if anything, likely to be an overestimate. The health effects of changes in expenditure are greater when PCTs are under more financial pressure and are more likely to be disinvesting than investing. This indicates that the central estimate of the threshold is likely to be an overestimate for all technologies which impose net costs on the NHS, and that the appropriate threshold to apply should be lower for technologies which have a greater impact on NHS costs.
Limitations: The central estimate is based on identifying a preferred analysis at each stage: the analysis that made the best use of available information, whose required assumptions appeared more reasonable than those of the other available alternatives, and which provided a more complete picture of the likely health effects of a change in expenditure. However, the limitations of currently available data mean that there is substantial uncertainty associated with the estimate of the overall threshold.
Conclusions: The methods go some way to providing an empirical estimate of the scale of opportunity costs the NHS faces when considering whether or not the health benefits associated with new technologies are greater than the health that is likely to be lost elsewhere in the NHS.
Priorities for future research include estimating the threshold for subsequent waves of expenditure and outcome data, for example by utilising expenditure and outcomes available at the level of Clinical Commissioning Groups, as well as additional data collected on QoL and updated estimates of incidence (by age and gender) and duration of disease. Nonetheless, the study also starts to make the other NHS patients, who ultimately bear the opportunity costs of such decisions, less abstract and more 'known' in social decisions.
Funding: The National Institute for Health Research-Medical Research Council Methodology Research Programme.
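To make the opportunity-cost logic of the threshold concrete, the following sketch divides a technology's additional cost by the report's central estimate of £12,936 per QALY to approximate the QALYs expected to be displaced elsewhere in the NHS, and subtracts that from the QALYs gained. The technology's cost and QALY gain are invented for illustration and are not an example from the report.

```python
# Illustrative use of a cost-effectiveness threshold as an estimate of health
# opportunity cost. The threshold is the report's central estimate; the
# technology's cost and QALY gain are hypothetical.

THRESHOLD_GBP_PER_QALY = 12_936  # central estimate (2008 expenditure, 2008-10 mortality)

def net_health_effect(extra_cost_gbp: float, qalys_gained: float,
                      threshold: float = THRESHOLD_GBP_PER_QALY) -> float:
    """QALYs gained by treated patients minus the QALYs expected to be lost
    elsewhere as the extra cost displaces other NHS activity."""
    qalys_displaced = extra_cost_gbp / threshold
    return qalys_gained - qalys_displaced

extra_cost, qalys_gained = 10_000.0, 0.5   # hypothetical technology
icer = extra_cost / qalys_gained

print(f"ICER:                      £{icer:,.0f} per QALY")
print(f"QALYs displaced elsewhere: {extra_cost / THRESHOLD_GBP_PER_QALY:.2f}")
print(f"Net health effect:         {net_health_effect(extra_cost, qalys_gained):+.2f} QALYs")
# The ICER (£20,000 per QALY) exceeds the £12,936 per QALY central estimate,
# so the expected net health effect is negative: adoption would be expected
# to displace more health than it generates.
```

On this logic, applying a decision threshold above the estimated opportunity cost risks recommending technologies whose expected net health effect is negative.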
Background: Cost-effectiveness analysis can guide policymakers in resource allocation decisions. It assesses whether the health gains offered by an intervention are large enough relative to any additional costs to warrant adoption. When there are constraints on the health care system's budget or ability to increase expenditures, additional costs imposed by interventions have an "opportunity cost" in terms of the health forgone because other interventions cannot be provided. Cost-effectiveness thresholds (CETs) are typically used to assess whether an intervention is worthwhile and should reflect health opportunity cost. Nevertheless, the CETs used by some decision makers, such as the World Health Organization's suggested range of 1 to 3 times gross domestic product (GDP) per capita, do not.
Objectives: To estimate CETs based on opportunity cost for a wide range of countries.
Methods: We estimated CETs based on recent empirical estimates of opportunity cost (from the English National Health Service), estimates of the relationship between country GDP per capita and the value of a statistical life, and a series of explicit assumptions.
Results: CETs for Malawi (the country with the lowest income in the world), Cambodia (with borderline low/lower-middle income), El Salvador (with borderline lower-middle/upper-middle income), and Kazakhstan (with borderline upper-middle/high income) were estimated to be $3 to $116 (1%–51% of GDP per capita), $44 to $518 (4%–51%), $422 to $1967 (11%–51%), and $4485 to $8018 (32%–59%), respectively.
Conclusions: To date, opportunity-cost-based CETs for low- and middle-income countries have not been available. Although uncertainty exists in the underlying assumptions, these estimates can provide a useful input to inform resource allocation decisions and suggest that routinely used CETs have been too high.
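The broad logic of the approach, scaling an opportunity-cost-based reference estimate by relative income, can be sketched as follows. The reference CET, GDP figures and the range of income elasticities below are illustrative assumptions rather than the paper's published inputs; the output is only meant to show why such CETs can fall well below 1 to 3 times GDP per capita.

```python
# Rough sketch of transferring an opportunity-cost-based threshold across
# countries by relative income. The reference CET, GDP figures and income
# elasticities are illustrative assumptions, not the paper's published inputs.

REF_CET_USD = 18_000             # assumed opportunity-cost-based reference CET (USD)
REF_GDP_PER_CAPITA_USD = 44_000  # assumed GDP per capita of the reference country (USD)

def scaled_cet(gdp_per_capita: float, elasticity: float) -> float:
    """Scale the reference CET by relative GDP per capita raised to an
    assumed income elasticity of the value of health."""
    return REF_CET_USD * (gdp_per_capita / REF_GDP_PER_CAPITA_USD) ** elasticity

examples = [("low-income example", 400),
            ("lower-middle-income example", 3_500),
            ("upper-middle-income example", 12_000)]

for label, gdp in examples:
    low = scaled_cet(gdp, elasticity=1.5)   # value of health more income-elastic
    high = scaled_cet(gdp, elasticity=1.0)  # value of health proportional to income
    print(f"{label}: CET ≈ ${low:,.0f}-${high:,.0f} "
          f"({low / gdp:.0%}-{high / gdp:.0%} of GDP per capita)")
```

The spread between the two elasticity assumptions mirrors the wide ranges reported in the abstract and underlines the paper's point that the estimates are inputs to deliberation rather than precise prescriptions.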
The use of decision-analytic modelling for the purpose of health technology assessment (HTA) has increased dramatically in recent years. Several guidelines for best practice have emerged in the literature; however, there is no agreed standard for what constitutes a 'good model' or how models should be formally assessed. The objective of this paper is to identify, review and consolidate existing guidelines on the use of decision-analytic modelling for the purpose of HTA and to develop a consistent framework against which the quality of models may be assessed. The review and resultant framework are summarised under the three key themes of Structure, Data and Consistency. 'Structural' aspects relate to the scope and mathematical structure of the model, including the strategies under evaluation. Issues covered under the general heading of 'Data' include data identification methods and how uncertainty should be addressed. 'Consistency' relates to the overall quality of the model. The review of existing guidelines showed that, although authors may provide a consistent message regarding some aspects of modelling, such as the need for transparency, they are contradictory in other areas; particular areas of disagreement are how data should be incorporated into models and how uncertainty should be assessed. For the purpose of evaluation, the resultant framework is applied to a decision-analytic model developed as part of an appraisal for the National Institute for Health and Clinical Excellence (NICE) in the UK. As a further assessment, the review based on the framework is compared with an assessment provided by an independent, experienced modeller who did not use the framework. It is hoped that the framework developed here may form part of the appraisal process for assessment bodies such as NICE and of the peer review of decision models submitted to journals. However, given the speed with which decision-modelling methodology advances, there is a need for its continual update.
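As a minimal, purely hypothetical example of the kind of model such a framework would be applied to, the sketch below evaluates two strategies in a one-period decision tree and propagates parameter uncertainty through a probabilistic sensitivity analysis, one of the 'Data' issues on which the reviewed guidelines disagree. None of the parameters relate to the NICE appraisal discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000  # probabilistic sensitivity analysis draws

# Hypothetical parameters, each drawn from a distribution reflecting its
# uncertainty (all values invented for illustration).
p_resp_new = rng.beta(60, 40, n)                   # response probability, new treatment
p_resp_std = rng.beta(45, 55, n)                   # response probability, standard care
cost_new = rng.gamma(shape=100, scale=20, size=n)  # treatment cost, ~GBP 2,000
cost_std = rng.gamma(shape=100, scale=10, size=n)  # treatment cost, ~GBP 1,000
QALY_RESPONSE, QALY_NO_RESPONSE = 0.8, 0.5         # outcomes over the model horizon

def strategy_outcomes(p_response, cost):
    """Expected cost and QALYs for one arm of a one-period decision tree."""
    qalys = p_response * QALY_RESPONSE + (1 - p_response) * QALY_NO_RESPONSE
    return cost, qalys

cost_a, qaly_a = strategy_outcomes(p_resp_new, cost_new)
cost_b, qaly_b = strategy_outcomes(p_resp_std, cost_std)

delta_cost, delta_qaly = cost_a - cost_b, qaly_a - qaly_b
icer = delta_cost.mean() / delta_qaly.mean()

threshold = 20_000  # illustrative willingness to pay per QALY
prob_cost_effective = float(np.mean(threshold * delta_qaly - delta_cost > 0))

print(f"Mean incremental cost:  £{delta_cost.mean():,.0f}")
print(f"Mean incremental QALYs: {delta_qaly.mean():.3f}")
print(f"ICER:                   £{icer:,.0f} per QALY")
print(f"P(cost-effective at £{threshold:,}/QALY): {prob_cost_effective:.2f}")
```

A reviewer applying the framework would then ask, under 'Structure', whether a one-period tree adequately represents the decision problem; under 'Data', how the input distributions were identified and justified; and under 'Consistency', whether the model's outputs behave sensibly as its inputs are varied.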