Heart failure disease management programmes were developed from the mid-1990s to improve outcomes via education, support, and optimization of medication regimens. Programmes were mostly targeted at patients after a previous hospitalization for heart failure. Support was typically provided by various types of interdisciplinary team via combinations of hospital-based clinics, home-based services, and telephone or remote electronic support, with widely varying content.
A building consensus: programmes work

From the start, evidence of the positive effects of programmes was encouraging. The earliest trial1 reported a 56% decrease in hospital admissions after 90 days. Since then, more than 40 trials of different kinds of heart failure disease management programmes and more than 15 meta-analyses have been published. These systematic reviews found that programmes reduced all-cause re-hospitalization (9/11 reviews), heart failure-related hospitalization (8/9 reviews), and all-cause mortality (6/12 reviews). Accordingly, with seemingly strong and consistent evidence, international support for programmes has grown rapidly, and clinical guidelines recommend that providers make programmes available widely.2
Responses to inconsistent findings

However, several large and high-quality recent trials have found only small or no benefits from programmes.3,4 From the Netherlands, one of the largest trials to date, COACH,5 identified no differences over usual care from either a low-intensity (4 visits) or a high-intensity (20 visits) home-visit heart failure disease management programme. A very large Medicare-funded trial6 of nine programmes for patients with heart failure and diabetes across the USA (n = 30 000) found no benefits for hospitalization, mortality, patient satisfaction, care experience, self-care, or mental/physical functioning, and costs far exceeded benefits.

These findings should not be used to reduce support for programmes, but they do raise important questions about why programme outcomes vary.3,4 First, trial results are likely to be somewhat inconsistent because individual trials tend to be underpowered. That said, some of the larger trials are those with negative findings. Inconsistent effects from programmes can be attributed to positive elements of trial design, such as atypically good usual care in comparison groups, rather than to biases, reporting inadequacies, or actual differences in programme effect size.7 Attribution to positive factors is reasonable but risks bias via the selective interpretation of negative results. This tendency is not uncommon: a recent systematic review of trials with negative results8 found that in 40% of cases, negative findings are 'spun' into affirmative results. To avoid bias, instead of downplaying or dismissing the significance of variations in findings, these should be acknowledged and explanations sought.9
In relation to programmes for heart failure, explanations for variations in programme outcomes are currently constrained by the quality of trial reports. Although...