Human service organizations (HSOs) currently operate in an environment in which they must demonstrate both the efficient use of resources and effective service outcomes. In response to these demands, externally imposed evaluations have been conducted on organizations to determine the value of their services. Because of this approach, evaluation has come to be seen by many HSOs as a "top down" process: funders and their chosen evaluators define the outcomes to be measured, choose the measurement instruments and research designs to be utilized, create and shepherd the processes by which data are gathered, take sole responsibility for the analysis of data, and control the reporting and dissemination of results.

At the same time that HSOs have had to adjust to these new accountability demands and the dominant way in which evaluations are conducted, empirical research has shown that the results of evaluation studies are underutilized by agencies. McNeece, DiNitto, and Johnson (1983), in their study of 42 health directors of community-based organizations, reported that while 83% of the study participants reported having been involved in evaluation research, only 25% noted that data on program effectiveness influenced their operational decisions. Weiss (1980), based on the reports of over 150 decision makers in community mental health agencies, reported that 33% of respondents did not use evaluation data in their decision making; another 10% of the sample, while answering that they utilized evaluation data, could not describe how it was used. Results of studies such as these have led some to conclude that there is "little direct and immediate application of evaluation to programmatic decisions" (Alkin, 1990, p. 25).