A method for executing a detailed evaluation of educational software is described. Several issues are discussed, including the use of an experimental setting versus a field study, and the design of assessment instruments suited to the evaluation of educational software. The instruments include a test of remedial skills, a 103-item test of conceptual understanding, and a system for recording students' use of the software. The method was used to evaluate ConStatS, a program for teaching conceptual understanding of probability and statistics. Preliminary results of the evaluation are presented.

The past several years have seen a good deal of research that has helped characterize how technology can serve education. Much, though not all, of this work has fallen into one of two categories. On the one hand, there is the bigger picture, which includes the meta-analytic studies of Kulik and Kulik (1991) and the framework offered by Kozma and Bangert-Drowns (1987) for studying software applied to learning and teaching. Such studies offer guidelines and expectations, which do not necessarily translate well into concrete concerns and choices for the instructor with fairly well-defined, discipline-specific teaching concerns. On the other hand, there is a host of small, fairly individualized studies on the effects of specific educational technologies in local settings. Many of the smaller studies lack a robust enough design to permit transfer of results.

Welsh (1993), Duncan (1993), Ransdell (1993), and Castellan (1993) have offered a set of guidelines for improving evaluation of educational software. Their papers emphasize the value of evaluation results that permit instructors to make well-informed decisions about the effective use of technology for specific educational ends. The present study deals with many of the points raised in those papers. In particular, we present a method for evaluating technology that has been developed over the past 2 years. This method is currently being used to evaluate ConStatS, a program developed at the Tufts University Curricular Software Studio for teaching introductory probability and statistics. As we present the method, we will make reference to this evaluation. We believe that all or parts of this method can be useful for evaluating educational technology in general.

The research in this article was supported in part by the Fund for the Improvement of Postsecondary Education (FIPSE), Grant 116AH70624. The authors would like to thank Durwood Marshall for statistical consultation, and Barbara Alarie, Paula Fisher, Christine Sossaman, and Joe Debold for document preparation support. Correspondence concerning this article may be sent to S. Cohen, Curricular Software Studio, Tufts University, Medford, MA 02155 (e-mail: SCohen@Jade.Tufts.edu).
Where Does ConStatS Fit Into Educational Technology?

To aid the description of just how ConStatS fits into the broad range of products that loosely define educational technology, we offer the following taxonomy for describing how technology might serve educat...