Meta-analytic techniques were used to explore overall conclusions and the variables moderating treatment effects in the research literature on school desegregation and black achievement. Studies were accepted or rejected for the analysis on the basis of threats to their validity. For the initial analysis, quasi-experimental studies were accepted, yielding an average effect size of .45. The better-designed studies had an average effect size of .34, which was reduced to .16 when adjusted for pretest differences. The National Institute of Education (NIE) convened an expert panel that reviewed and reanalyzed these results. An average pretest-adjusted effect size of .14 was found for the 19 studies selected for analysis by the NIE panel, and an average effect size of .20 was found for the better-designed studies that had no selection problems. This is equivalent to two months of educational gain. The largest effects occurred among students moving from highly segregated to predominantly white schools. Reading achievement gains were larger than those for mathematics, but the difference was not statistically significant.
The purpose of this evaluation study is to identify problems and suggest modifications in the NIH Consensus Development Program. The current program consists of three-day conferences in which experts assess medical technologies for issues of efficacy, safety, conditions of use, and other related topics (e.g., costs and social impact). Eight consensus conferences held between 1980 and 1982 were studied in depth using a variety of methods; five of the conferences were investigated concurrently. In addition, archival material was examined for all but one of the 33 conferences held up to that time, and four planning meetings for future conferences were observed. The delay in publishing our findings provided an opportunity to examine the changes introduced by NIH; it also allowed us to avoid the criticism leveled at numerous prior evaluations for finding fault with programs that were still developing. NIH adopted many of the recommendations in our evaluation report and has investigated others. Based on our evaluation and more recent evidence, however, we conclude that the major problem uncovered, selection bias, particularly with respect to the choice of questions and panelists, remains a significant threat to the credibility of the consensus process. More specifically, the results indicate that controversial issues cannot be properly addressed within the present conference format, although addressing them was one of its major purposes. Recommendations for improving the consensus process are presented, as are their implications for a larger set of consensus activities currently being conducted.
Psychology is presently becoming enmeshed in research on social problems (Miller, 1969). There are many reasons for this: current demands for meaningful, relevant activities; new governmental programs for social problems research such as the National Science Foundation's Research Applied to National Needs (RANN); budgetary legislation mandating "accountability" for social-action programs; and the search for new areas of psychological expertise produced by a tightening job market for recent PhDs. Among psychologists, there has been a small but growing group that has welcomed this involvement and viewed it as a change for the better. To these psychologists, this involvement legitimizes the investigations they have been conducting for the last few years in a new discipline called evaluation research. In reality, evaluation research is emerging as an interdisciplinary umbrella for all social scientists working to assess the impact of the ever-increasing number of programs proposed as ameliorative solutions to social problems. During this time, many concepts, methods, and models, such as internal and external validity and summative and formative evaluation, have been devised to determine the impact of social change processes. The purpose of this article is to present a coherent, explanatory model of evaluation research from a psychological perspective that incorporates these ideas, and to indicate the roles in which psychologists trained in more traditional areas can contribute to this enterprise.