Large-scale field evaluations of education programs typically present complex and competing design requirements that can rarely be satisfied by ideal, textbook solutions. This paper uses a recently completed national evaluation of the federally funded Emergency School Aid Act (ESAA) Program to illustrate in concrete fashion some of the problems often encountered in major program evaluations. It traces the evolution of efforts in that three-year longitudinal study, both in the original design conceptualization and in the actual implementation and data analysis phases, to resolve competing demands and to provide as much methodological rigor as possible under field conditions. Issues discussed here include the selection of experimental versus quasi-experimental designs; the development of sampling procedures to provide maximum precision in treatment-control comparisons; the selection of achievement tests and difficulties in developing and administering other, non-cognitive outcome measures; and the importance of ascertaining whether the underlying assumptions of a true experimental design have been met before conclusions about program impact are drawn on the basis of treatment-control comparisons.

Readers may note that the report tends to dwell on problems, on difficult and sometimes controversial design decisions, and on compromise approaches. Little attention is paid to the more routine aspects of the evaluation in which traditional, textbook solutions were applied to the evaluation design, the data collection efforts, or the data analyses. The reason for this approach is that it seems more important to convey a realistic picture of large-scale evaluations, where design tradeoffs, expedient solutions, and dirty data often prevail, than to sing the praises of the ESAA evaluation. Furthermore, there is much truth in the old saw that more is learned from difficulties than from simple solutions. Textbooks and graduate schools teach ideal solutions, but only practical experience can prepare researchers for the difficult decisions required in field studies.

To keep things in perspective, however, it is important to point out that the ESAA evaluation was, by any standards, among the more rigorously designed and implemented studies of education programs ever conducted on a national scale. This rigor stemmed not only from the study's design and analyses, which emphasized experimental and quasi-experimental techniques, but also from the care that went into the instrument development efforts and from the use of trained, independent data collectors for test administration, interviews, and in-depth observations. Despite compromises required along the way, the basic integrity of the study's design was sufficiently well preserved that considerable confidence can be placed in the evaluation findings.

II. THE PROGRAMS BEING EVALUATED

Although SDC's ESAA...
Two groups of 15 high school subjects received instruction in logic from a computer-controlled autoinstructional device. In one group, all subjects received a fixed sequence of 233 items. In the second group, each subject received a different number and sequence of items, depending on the subject's performance during the lesson. Branching decisions were based on errors and on the subject's evaluation of his own readiness to advance to new topics. Posttest scores were significantly higher (.05 level) for the branching group than for the fixed-sequence group; training-time differences were not significant.
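The branching mechanism described above can be made concrete with a short sketch. The Python fragment below is a hypothetical illustration of error- and readiness-based item sequencing, not a reconstruction of the original device's logic; the error threshold, skip distance, and `learner_ready` self-report flag are all assumptions introduced for the example.

```python
# Hypothetical sketch of error-based branching in an autoinstructional
# sequence. Thresholds and the readiness self-report are illustrative
# assumptions, not the original device's parameters.

def next_item(index, errors_on_topic, learner_ready, n_items,
              error_threshold=2, skip_distance=3):
    """Choose the next item index from performance during the lesson.

    - Too many errors on the current topic: back up one item to review.
    - Learner reports readiness and is error-free: skip ahead.
    - Otherwise: proceed in fixed order.
    """
    if errors_on_topic >= error_threshold:
        return max(index - 1, 0)                        # remedial back-branch
    if learner_ready and errors_on_topic == 0:
        return min(index + skip_distance, n_items - 1)  # advance to new topic
    return min(index + 1, n_items - 1)                  # default fixed step


# Usage: simulate a few branching decisions over a 233-item sequence.
if __name__ == "__main__":
    print(next_item(10, errors_on_topic=3, learner_ready=False, n_items=233))  # 9
    print(next_item(10, errors_on_topic=0, learner_ready=True,  n_items=233))  # 13
    print(next_item(10, errors_on_topic=1, learner_ready=False, n_items=233))  # 11
```

Under this scheme, the number and sequence of items differ across learners, which is why each subject in the branching group received a different item count.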
"In Experiment I, three groups of 17 subjects were used to test 2 hypotheses concerning optional branching. A fixed-sequence group received items in fixed order; a back-branching group receiving the same items as the first group, was permitted to back up one item at a time to review earlier items; a third group received the same items cast in statement form and organized into paragraphs permitting subjects to choose material at their own option. A significant difference on a posttest in favor of the third group was obtained when the first and third groups were compared. In Experiment II, a computer-controlled teaching machine was used to evaluate the effectiveness of adapting sequences of teaching items on logic. Members of a branching group received sequences of items determined by the errors that were made during instruction. Each member of a fixed sequence group was paired at random with one member of the branching group . . . . Covariance analysis of criterion scores using aptitude and training time as control variables yielded no significant difference between branching and fixed-sequence conditions."