A method for executing a detailed evaluation of educational software is described. Several issues are discussed, including the use of an experimental setting versus a field study, and the design of assessment instruments suited to the evaluation of educational software. The instruments include a test of remedial skills, a 103-item test of conceptual understanding, and a system for recording students' use of the software. The method was used to evaluate ConStatS, a program for teaching conceptual understanding of probability and statistics. Preliminary results of the evaluation are presented.

The past several years have seen a good deal of research that has helped characterize how technology can serve education. Much, though not all, of this work has fallen into either of two categories. On the one hand, there is the bigger picture, which includes the meta-analytic studies of Kulik and Kulik (1991) and the framework offered by Kozma and Bangert-Drowns (1987) for studying software applied to learning and teaching. Such studies offer guidelines and expectations, which do not necessarily translate well into concrete concerns and choices for the instructor with fairly well-defined, discipline-specific teaching concerns. On the other hand, there is a host of small, fairly individualized studies on the effect of specific educational technologies in local settings. Many of the smaller studies lack a robust enough design to permit transfer of results.

Welsh (1993), Duncan (1993), Ransdell (1993), and Castellan (1993) have offered a set of guidelines for improving evaluation of educational software. Their papers emphasize the value of evaluation results that permit instructors to make well-informed decisions about the effective use of technology for specific educational ends. The present study deals with many of the points raised in those papers. In particular, we present a method for evaluating technology that has been developed over the past 2 years. This method is currently being used to evaluate ConStatS, a program developed at the Tufts University Curricular Software Studio for teaching introductory probability and statistics. As we present the method, we will make reference to this evaluation. We believe that all or parts of this method can be useful for evaluating educational technology in general.

The research in this article was supported in part by the Fund for the Improvement of Postsecondary Education (FIPSE), Grant 116AH70624. The authors would like to thank Durwood Marshall for statistical consultation, and Barbara Alarie, Paula Fisher, Christine Sossaman, and Joe Debold for document preparation support. Correspondence concerning this article may be sent to S. Cohen, Curricular Software Studio, Tufts University, Medford, MA 02155 (e-mail: SCohen@Jade.Tufts.edu).

Where Does ConStatS Fit Into Educational Technology?

To aid the description of just how ConStatS fits into the broad range of products that loosely define educational technology, we offer the following taxonomy for describing how technology might serve educat...
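The abstract above mentions a system for recording students' use of the software but does not specify its format. As a minimal sketch of what such a trace recorder might look like (all names and fields here are hypothetical, not the instrument actually used in the ConStatS evaluation), consider:

```python
# Hypothetical sketch of a student-interaction trace recorder of the kind
# the abstract describes; the paper does not specify the actual record format.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class TraceEvent:
    student_id: str   # anonymized student identifier
    screen: str       # which part of the program the student was in
    action: str       # e.g., "select_option", "enter_value"
    detail: str       # free-form description of the input
    timestamp: float  # seconds since the epoch

class TraceRecorder:
    """Appends one JSON line per interaction so sessions can be replayed."""
    def __init__(self, path: str):
        self.path = path

    def record(self, student_id: str, screen: str, action: str, detail: str) -> None:
        event = TraceEvent(student_id, screen, action, detail, time.time())
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(event)) + "\n")

# Example: log a student choosing a dataset on a sampling screen.
recorder = TraceRecorder("trace.jsonl")
recorder.record("s017", "sampling_experiment", "select_dataset", "coin_flips")
```

An append-only log of timestamped events is enough to reconstruct each student's path through the program after the fact, which is the property an evaluation of this kind needs.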
Premise of the Study: Biological collections are uniquely poised to inform the stewardship of life on Earth in a time of cataclysmic biodiversity loss. Efforts to fully leverage collections are impeded by a lack of trained taxonomists and a lack of interest and engagement by the public. We provide a model of a crowd-sourced data collection project that produces quality taxonomic data sets and empowers citizen scientists through real contributions to science. Entitled MicroPlants, the project is a collaboration between taxonomists, citizen science experts, and teachers and students from universities and K–12.

Methods: We developed an online tool that allows citizen scientists to measure photographs of specimens of a hyper-diverse group of liverworts from a biodiversity hotspot.

Results: Using the MicroPlants online tool, citizen scientists are generating high-quality data, with preliminary analysis indicating that non-expert data can be comparable to expert data.

Discussion: More than 11,000 users from both the website and kiosk versions have contributed to the data set, which is demonstrably aiding taxonomists working toward establishing conservation priorities within this group. MicroPlants provides opportunities for public participation in authentic science research. The project's educational component helps move youth toward engaging in scientific thinking and has been adopted by several universities into curricula for both biology and non-biology majors.
We present a simple model of generative learning that permits us to define four kinds of interactions, and a system for tracing and recording how students use educational technology. We believe that this model will maintain a link between interaction and learning, thus providing one method for the assessment of a wide range of educational technology environments. Two results are presented from an evaluation of ConStatS, a program for teaching conceptual understanding of probability and statistics. The results illustrate the kinds of insight into generative learning that a detailed trace method can provide.

One of the most widely cited reasons for using computer technology in education is to help students interact meaningfully with ideas and learn generatively (Cohen, Smith, Chechile, & Cook, 1994). The focus on education through interaction has taken many forms, not all of which address instructional technology. Research on interaction has often been cast in process-outcome models. For instance, process-outcome models of teacher and classroom interaction have yielded insights into the effectiveness of instructional pace and the influence of teacher expectations (Brophy, 1986). Many of the results depend on the profile of the class and are not universally effective. Process-outcome models have also been used to investigate reasoning skills through performance on verbal analogy and classification problems (Alderton, Goldman, & Pellegrino, 1985). The models have been effective at isolating process differences between the most and least successful subjects, as well as providing evidence that common or similar processes are responsible for skills across domains.

Much of the research on student interaction with instructional technology has made use of specific learning models that conditionalize responses on student input. Dede (1985) describes seminal examples of such programs (e.g., Buggy and Debuggy), which are used for teaching subtraction and for diagnosing execution problems. Park and Tennyson (1983) discuss several models, including a Bayesian model of concept generalization that selects subsequent problems and examples on the basis of students' interactive histories with the programs. In each case, the model made specific use of the interactions and generated a systematic but limited set of responses.
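Park and Tennyson's Bayesian model is only summarized above. As a rough illustration of the general idea (not their actual formulation), the sketch below maintains a Beta posterior over a student's mastery of a concept, updates it after each response, and uses the posterior mean to select the next item; the item labels and thresholds are invented for the example.

```python
# Illustrative sketch (not Park & Tennyson's actual model) of Bayesian item
# selection: update a Beta-Bernoulli posterior over mastery after each
# response, then pick the next problem or example from the current estimate.

def update(alpha: float, beta: float, correct: bool) -> tuple[float, float]:
    """Beta-Bernoulli update of the mastery posterior after one response."""
    return (alpha + 1, beta) if correct else (alpha, beta + 1)

def next_item(alpha: float, beta: float) -> str:
    """Serve an easier example when estimated mastery is low and a harder
    problem when it is high (thresholds are arbitrary, for illustration)."""
    mastery = alpha / (alpha + beta)  # posterior mean
    if mastery < 0.4:
        return "worked_example"
    elif mastery < 0.75:
        return "practice_problem"
    return "transfer_problem"

# Simulate a short interactive history, starting from a uniform Beta(1, 1) prior.
alpha, beta = 1.0, 1.0
for response in [True, True, False, True]:
    alpha, beta = update(alpha, beta, response)
    print(f"mastery ~ {alpha / (alpha + beta):.2f} -> next: {next_item(alpha, beta)}")
```

The point of the illustration is the one the text makes: the model conditionalizes its responses on the student's interactive history, but the set of responses it can generate is systematic and limited.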