As student numbers on computer science courses continue to increase, the corresponding assessment demands placed on teaching staff grow ever greater. In particular, the submission and assessment of practical work on large programming courses can present significant problems. In response, we have developed a networked suite of software utilities that allows on-line submission, testing and marking of coursework. Over five years of development and use, it has evolved into a mature tool that has greatly reduced the administrative time spent managing the processes of submission and assessment. In this paper, we describe the software and its implementation, and discuss the issues involved in its construction.
The problem

As has been documented elsewhere [1-3], it is generally not possible to assess the correctness of a program with any accuracy simply by inspecting the source code listings. While this is obviously true of large programs, it holds even for the small programs written on introductory programming courses. The only way to arrive at an accurate assessment of a program is to run it against several sets of test data, yet this is time-consuming, and can be prohibitive if done manually with large classes. It is possible to require students to provide evidence of their own testing, but this demands further skills that are not typically covered by introductory programming courses. Moreover, such tests might easily be faked by students modifying the output from their programs to give the desired results.

By automating the processes of submission and testing, these problems can, at least to some extent, be addressed. Indeed, several distinct requirements of systems for such a purpose were identified by a recent panel discussion [1], and elsewhere [4,5]; a sketch of how some of them might be realized follows the list.

(a) The system must copy the student's source code to a location accessible only by the instructor, noting the date and time of submission.
(b) It should allow for multiple source files of various types, including documentation.
(c) It should allow late submission of assignments.
(d) Before submission, the system should compile and run the program against public test cases to alert the student to obvious errors.
(e) After submission, the program should be compiled and run against several sets of test data.
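To make requirements (a), (d) and (e) concrete, the following minimal Python sketch records a timestamped submission in an instructor-only area and then compiles and runs the program against stored test data. It is an illustration only, not the implementation described in this paper; the directory layout, the cc compiler invocation and the .in/.expected test-file naming convention are all assumptions made for the example.

import shutil
import subprocess
import time
from pathlib import Path

# Instructor-only area, requirement (a); assumed created with restrictive permissions.
SUBMISSION_DIR = Path("/courses/cs101/submissions")
# Each test case is assumed to be a pair of files: name.in (input) and name.expected (output).
TESTS_DIR = Path("/courses/cs101/tests/assignment1")

def submit(student: str, source: Path) -> Path:
    """Copy the student's source into the instructor's area, noting the date and time."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = SUBMISSION_DIR / student / f"{stamp}-{source.name}"
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(source, dest)
    return dest

def run_tests(source: Path) -> int:
    """Compile the program and run it against each set of test data,
    requirements (d) and (e); returns the number of failing cases."""
    exe = source.with_suffix("")
    subprocess.run(["cc", "-o", str(exe), str(source)], check=True)
    failures = 0
    for case in sorted(TESTS_DIR.glob("*.in")):
        expected = case.with_suffix(".expected").read_text()
        with case.open() as stdin:
            result = subprocess.run([str(exe)], stdin=stdin,
                                    capture_output=True, text=True, timeout=10)
        if result.stdout != expected:
            failures += 1
            print(f"test {case.name}: output differs from expected")
    return failures

A production system would additionally record late submissions separately, as in requirement (c), and sandbox the compiled program before executing it.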
Related work

There have been several attempts to address some of the concerns outlined above, with varying degrees of success. MacPherson [6] describes a system that allows students to work in a special course directory in their own filestore, but at the appropriate time transfers ownership to an instructor who can then subsequently run and test the programs. Canup and Shackelford [7] have developed a suite of programs for assisting in automatic submission, but do not address automated testing of programs. Isaacson and Scott [8] use a C shell script for automating the compilation and testing of student programs against sets of test data once...