This chapter covers the administration, scoring, and reporting of scores from language tests and examinations. Procedures typically used both in major language examinations and in small‐scale classroom testing and assessment are covered. It is argued that administration, scoring, and reporting procedures depend heavily on the purpose and stakes of the assessment. Selected national and international guidelines of good practice are reviewed to see what they say about these phases of the assessment process, and examples are given of how, for instance, high‐stakes certification and achievement tests differ from teacher‐based formative and diagnostic tests.
The review considers how the skill being tested affects how a test is administered and scored: in particular, the testing of speaking and writing differs in many ways from the testing of comprehension. The medium of assessment also has a considerable effect, as modern information and communications technology (ICT) can radically change not only test administration but also scoring and reporting. ICT enables immediate reporting of results, more detailed feedback, and even automated scoring of performances.
An account is given of the research on administering, scoring, and reporting scores of language tests. Studies of test administration are uncommon except for types of testing in which administration is intertwined with the test format, as in computerized testing, which is often compared with paper‐based testing, and in oral testing, where factors related to the setting, the participants, and so forth have been studied. Scoring procedures have received considerable attention from researchers, especially where speaking and writing are concerned, and an outline of these studies and selected findings from them are presented. Reporting scores appears to be the least studied area, but even here some research exists into, for example, the meaningfulness of different reporting formats, and this is briefly described.