Formal, experimental methods have proved increasingly difficult to implement, and they lack the capacity to generate detailed results when evaluating the impact of CAL on teaching and learning. The rigid nature of experimental design restricts the scope of investigations and the conditions in which studies can be conducted. It has also consistently failed to account for all influences on learning. In innovative CAL environments, practical and theoretical development depends on the ability to investigate the full range of such influences. Over the past five years, a customizable evaluation framework has been developed specifically for CAL research. The conceptual approach is defined as Situated Evaluation of CAL (SECAL), and the primary focus is
Experiments that failed

Scientific, experimental methodology was previously considered to be the only acceptable approach to educational research. Two important principles of experimental design are:

• to balance individual differences within study populations and so achieve generalizable results, and
• to attempt to isolate the effects of a single resource for evaluation purposes.

Problems with this approach were reported in the literature of the 1970s (Elton and Laurillard, 1979; MacDonald and Jenkins, 1979), when the influence on learning of individual and contextual factors was recognized. Similar issues emerged during the 1980s and early 1990s (Bates, 1981; Spencer, 1991), when the inability to identify which single or combined factors supported learning became a recurrent problem. It was clear that prior knowledge, approaches to learning, provision of appropriate scaffolding, complementary combinations of resources and various contextual factors all influenced the quality of learning outcomes. It was concluded that evaluations must be designed to account for these factors, rather than to balance or disregard them as was previously the norm (Kemmis, 1987; Gunn, 1995).

Another problem stemmed from the belief that single studies involving large sample sizes were necessary to produce meaningful results. The rather indiscriminate choice of study populations required to produce the requisite numbers frequently resulted in low motivation and low perceived relevance of evaluation tasks to personal and educational goals (Draper et al., 1996). This suggested that the true potential for learning with CAL could not be reliably assessed unless its use formed an integral part of a course, and evaluations involved only the students on that course. It was thus concluded that the more specific aspects of CAL evaluation could not be served by a general and inflexible research methodology originally designed to measure the uniform and largely predictable behaviour of organisms in the physical sciences.
The basis of an alternative methodology

In the context of the work reported here, development of a suitable methodology began with a review of educational research literature. Critical theory (Carr and Kemmis, 1986), critical ethnography (Angus, 1986) and qualitative methodo...