A survey of American psychology training clinics was undertaken to determine the scope, nature, impact, and problems of evaluation research conducted in these settings. Survey questions explored evaluation of both clinical training and client treatment. Seventy-four usable responses (56%) were received, of which 68% reported current quantitative evaluation of client treatment and 61% reported current quantitative evaluation of clinical training. A wide variety of specific outcome measures were used with varying frequency. Most evaluation activities were supported exclusively by internal financing, with the clinic director the most likely collector of evaluation data and the clinic staff the most likely recipients of evaluation findings. Major obstacles to evaluation included resource constraints, staff resistance, pragmatic difficulties, and technological limitations. Forty-eight percent of the directors of clinics conducting treatment evaluation believed evaluation had a significant influence on policy, whereas 42% of those conducting training evaluation reported such influence. Several correlates of policy impact were also identified. Plans to conduct further evaluation were widespread, though not universal. The need for better measures, faculty resistance to evaluation, ways of improving policy impact, and the importance of increased communication across training sites are discussed.

Although federal support for evaluation of mental health services is currently in doubt, the long-term trend is toward increasing emphasis on the use of scientifically valid data in the administrative and clinical decision-making process (Aaronson & Wilner, 1983;