“…Recently, a considerable amount of literature has focussed on the subject of efficiency comparison among public health‐care trusts (Ellwood, 1996; Dawson and Street, 2000; Dawson et al., 2001; Jones, 2002; Northcott and Llewellyn, 2002, 2003, 2005; Llewellyn and Northcott, 2005; Barretta, 2005). A number of these studies (Ellwood, 1996; Jones, 2002; Northcott and Llewellyn, 2002, 2003; Barretta, 2005) have underlined that:
- the users of inter‐trust efficiency analyses believe that certain factors (such as differences in cost‐allocation methods and the presence of special external and internal circumstances) affect data variability and conceal “real (in)efficiency”;
- the presence of these “disturbing factors” is one of the prime contributors to unreliable data; and
- the results obtained are only minimally used by trusts with the aim of benchmarking.
The literature has presented a variety of strategies aimed at neutralising some of the “disturbing factors” that hinder the focus on “real (in)efficiency” in inter‐trust efficiency comparisons, such as:
- defining and periodically updating a uniform costing system for all trusts (Ellwood, 1996; Jones, 2002; Northcott and Llewellyn, 2002, 2003);
- excluding any costs subjectively assigned to the cost object (so‐called indirect costs) from the analysis (Northcott and Llewellyn, 2003; Barretta, 2005); and
- creating clusters of trusts (or their sub‐units) that share similar characteristics with respect to their internal or external environments (Dawson et al., 2001; Northcott and Llewellyn, 2003; Barretta, 2005).
…”