This chapter presents a brief history of the National Science Foundation's involvement in education evaluation, reviews the program evaluations that NSF's Division of Research, Evaluation, and Communication (REC) has conducted in recent years, and describes future directions. The evaluations have reflected the field-driven nature of NSF's programs, which are guided primarily by peer review. Three key issues have underpinned NSF's policy regarding education evaluation: (1) the question must drive the methodology, with each evaluation expected to adhere to standards of evidence relevant to the approach chosen to answer the question; (2) there is a shortage of well-qualified evaluators for STEM (science, technology, engineering, and mathematics) education projects and programs; and (3) there is a serious lack of instruments of demonstrated validity and reliability to measure important outcomes of STEM education interventions, including teacher knowledge and skills, classroom practice, and student conceptual understanding in mathematics and science. There are also contextual factors that must be considered when examining the evaluations that have been conducted. These factors have created limitations for evaluation that must be accepted; otherwise, NSF will have to operate its programs under other rationales, as discussed later.
The Evaluation Network began in 1975. It built membership very rapidly, from an initial group of fewer than 100 to nearly 1,000 in less than a year. ENet obviously touched a need, and dues were then only $4. However, ENet wasn't prepared to handle a membership of this size; it quite rapidly dropped to about 500 and then increased to around 1,000 by 1980.

From the outset, the organization was very loose. Newsletters, although of excellent quality, were published somewhat sporadically. Membership was handled similarly: some people who sent money never heard anything from ENet, while others received EN for several years despite failing to pay dues. The annual conference was small and informal. In comparison to other professional meetings, the submission deadline was only a couple of months ahead of the conference, which allowed last-minute decisions and probably fostered presentations that were more "this is what I am doing right now" than "this is a piece of research I have completed." (Incidentally, I don't believe this is bad; AERA, APA, ERS, and others provide opportunities to present research.) All of this is probably natural for a new professional group, particularly one operating on a shoestring. In time, the initial problems were solved: Evaluation News became regular in its publication, membership problems were cleared up, and the annual meeting became more structured. Then three major events happened:

1. Dues were raised to $10. In dollars it wasn't much of a raise, but it took ENet out of the division or SIG membership league in terms of cost.

2. Sage took over publishing EN and volunteered to handle our membership list as well. This meant greater publishing costs, an attendant loss of flexibility in handling memberships, and endless problems with our membership list. Only the last was unexpected.

3. ENet began to meet with ERS. The annual meeting changed from an operation that generated a few hundred dollars' profit or less to a major source of income. With the additional income has come a marked