With the increasing recognition and application of casemix for managing and financing healthcare resources, the evaluation of alternative versions of systems such as diagnosis-related groups (DRGs) has been afforded high priority by governments and researchers in many countries. Outside the United States, an important issue has been the perceived need to produce local versions, and to establish whether or not these perform more effectively than the US-based classifications. A discussion of casemix evaluation criteria highlights the large number of measures that may be used, the rationale and assumptions underlying each measure, and the problems in interpreting the results. A review of recent evaluation studies from a number of countries indicates that considerable emphasis has been placed on the predictive validity criterion, as measured by the R² statistic. However, the interpretation of the findings has been greatly affected by the methods used, especially the treatment and definition of outlier cases. Furthermore, the extent to which other evaluation criteria have been addressed has varied widely. In the absence of minimum evaluation standards, it is not possible to draw clear-cut conclusions about the superiority of one version of a casemix system over another, the need for a local adaptation, or the further development of an existing version. Without the evidence provided by properly designed studies, policy-makers and managers may place undue reliance on subjective judgments and the views of the most influential, but not necessarily best informed, healthcare interest groups.
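For context, the R² criterion referred to here is usually computed in its reduction-in-variance form; the following is a sketch using notation of our own (y denotes the resource-use measure, typically cost or length of stay, and g indexes the DRGs), not a formula quoted from the studies reviewed:

$$ R^2 = 1 - \frac{\sum_{g}\sum_{i \in g}\left(y_{ig} - \bar{y}_g\right)^2}{\sum_{i}\left(y_i - \bar{y}\right)^2} $$

Here \(\bar{y}_g\) is the mean resource use of cases assigned to DRG g and \(\bar{y}\) is the overall mean, so R² is the proportion of variance in resource use explained by the grouping. Because trimming or truncating outlier cases removes extreme values from both sums, the resulting R² is typically higher, which is one reason the definition and treatment of outliers so strongly affects comparisons across studies.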
This paper reports the results of an evaluation study of the Australian National Diagnosis Related Groups (AN-DRGs). The evaluation was based on statistical rather than clinical criteria, with the principal goal of providing information for the future development of the classification system. In addition to comparing versions 1.0 to 3.0 of the AN-DRGs, the project compared these systems with the most recent versions of the DRG systems from the United States. Taking all the evaluation criteria together, Version 3.0 of the AN-DRGs performed best of all the systems except the All Patient Refined (APR) DRGs, which have a much larger number of groups. However, the differences between all the classifications were slight. Data of higher quality are needed if further refinements of the AN-DRGs are to produce substantial improvements in performance.