DISCLAIMER

This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
ABSTRACT

Recent budgetary shortfalls and heightened concern over balancing the federal budget have placed increasing demand on federal agencies to document the cost effectiveness of the programs they manage. In fact, the 1993 Government Performance and Results Act (GPRA) requires that by 1997 each executive agency prepare a Strategic Plan that includes measurable performance goals. Beginning in Fiscal Year 1999, agencies must also prepare an Annual Performance Plan that describes how actual program results will compare with performance goals. By the year 2000, the first round of Annual Reports will become due, describing actual program performance.

Despite the growing emphasis on measuring the performance of government programs, the technology policy literature offers little in terms of models that program managers can implement in order to assess the cost effectiveness of the programs they manage. To date, the technology evaluation literature consists only of short-term indicators of performance.

While GPRA will pose a major challenge to all federal government agencies, that challenge is particularly difficult for research-oriented agencies such as the Department of Energy. Its basic research programs provide benefits that are difficult to quantify, since their values are uncertain with respect to timing but are usually reflected in the value assigned to applied programs. The difficulty with quantifying the benefits of applied programs relates to the difficulties of obtaining complete information on industries that have used DOE's supported technologies in their production processes, and data on cost savings relative to conventional technologies. Therefore, DOE is one of several research-oriented agencies that has a special need for methods by which program offices can evaluate the broad array of applied and basic energy research programs they administer.
The Office of Science and Technology Policy, which supported this project, seeks to aid DOE's program offices in their efforts to evaluate programs. More specifically, this report seeks to familiarize program offices with available methods for conducting program evaluations. To a...