2010
DOI: 10.1136/bmj.c2016
Using hospital mortality rates to judge hospital performance: a bad idea that just won't go away

Abstract: Standardised mortality rates are a poor measure of the quality of hospital care and should not be a trigger for public inquiries such as the investigation at the Mid Staffordshire hospital, say Richard Lilford and Peter Pronovost

Cited by 242 publications (197 citation statements)
References: 25 publications
“…Over the past decade public healthcare resource allocation has shifted to PHC services at the expense of tertiary hospital budgets. 15 While there is no evidence that services at GSH have deteriorated during this period, and hospital mortality rates can be problematic as measures of quality of care, 17 these data cannot rule out this possibility.…”
Section: Discussion
Confidence: 97%
“…5,6 While many clinicians remain confused and sceptical about mortality measures, the concept of detecting 'preventable' hospital deaths has an intuitive appeal to the public, policymakers and politicians. In this paper, we aim to offer clinicians and NHS leaders some practical ways to learn from deaths in acute hospital care.…”
Section: Introduction
Confidence: 99%
“…The latter are also more susceptible to case-mix variation, care processes outside the direct control of the QI team and to variation in how the case mix is coded. 10,13-16 Practice variation can only be identified by collecting data on several providers or facilities and comparing the results. Therefore, measurement usually occurs at the regional or national level.…”
Section: Measuring Practice Variation
Confidence: 99%
“…However, in clinical practice, it can be difficult to be sure that differences in mortality, or in other frequently used clinical outcomes (eg length of stay, readmission rate or patient-reported outcomes), are really the result of differences in the quality of care, rather than of the case mix ('But my patients are different: we treat sicker patients than other centres'). 14 Therefore, valid comparison between centres requires collection of reliable data on variables that might affect outcome (eg age, comorbidity and functional status) as well as reliable coding of the primary diagnosis. 'Coding depth' should also be assessed: hospitals that are systematically better at coding for comorbidities will appear 'better' in comparisons of mortality after adjustment for comorbidity.…”
Section: Interpreting Practice Variation: the Centre Effect
Confidence: 99%
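The case-mix point in the statements above can be made concrete with a standardised mortality ratio (SMR): observed deaths divided by the deaths a case-mix model would expect given each patient's risk profile. The sketch below uses entirely hypothetical numbers (not from the paper or any cited study) to show how a hospital with a higher crude mortality rate can look better than its comparator once expected deaths are accounted for — and, conversely, why systematic differences in risk coding ("coding depth") can distort the comparison.

```python
# Hypothetical illustration: crude mortality vs. a case-mix-adjusted SMR.
# Each patient record is (model-predicted risk of death, died? 1/0).
# The predicted risks stand in for a case-mix model's output; note that if
# one hospital codes comorbidities more thoroughly, its predicted risks
# (and hence expected deaths) are inflated, flattering its SMR.

def smr(patients):
    """Standardised mortality ratio = observed deaths / expected deaths,
    where expected deaths is the sum of model-predicted risks."""
    observed = sum(died for _, died in patients)
    expected = sum(risk for risk, _ in patients)
    return observed / expected

# Hospital A treats sicker patients (higher predicted risks).
hospital_a = [(0.30, 1), (0.25, 0), (0.20, 1), (0.25, 0)]
# Hospital B treats a low-risk population.
hospital_b = [(0.05, 0), (0.10, 0), (0.05, 0), (0.10, 1)]

crude_a = sum(d for _, d in hospital_a) / len(hospital_a)  # 0.50
crude_b = sum(d for _, d in hospital_b) / len(hospital_b)  # 0.25

print(crude_a, crude_b)    # A looks twice as bad on crude rates
print(smr(hospital_a))     # 2 observed / 1.00 expected = 2.0
print(smr(hospital_b))     # 1 observed / 0.30 expected ≈ 3.33
```

After adjustment the ranking reverses: hospital B, despite half the crude mortality, has the higher SMR. This is the direction of error the quoted authors warn about, and it is why they insist on reliable case-mix data and consistent coding before any between-centre comparison.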