CONTEXT Programmes of assessment should measure the various components of clinical competence. Clinical reasoning has traditionally been assessed using written tests and performance-based tests. The script concordance test (SCT) was developed to assess clinical data interpretation skills. A recent review of the literature examined the validity argument concerning the SCT. Our aim was to provide potential users with evidence-based recommendations on how to construct and implement an SCT.

METHODS Our literature search was broad and included references from medical education journals not indexed in the usual databases, conference abstracts and dissertations.

RESULTS The search yielded 848 references, of which 80 were analysed. Studies suggest that tests with around 100 items (25-30 cases), of which 25% are discarded after item analysis, should provide reliable scores. Panels of 10-20 members are needed to reach adequate precision in terms of estimated reliability. Panellists' responses can be analysed by checking for moderate variability among them. Studies of alternative scoring methods are inconclusive, but the traditional scoring method is satisfactory. There is little evidence on how best to determine a pass/fail threshold for high-stakes examinations.

CONCLUSIONS There is good evidence on how to construct and implement an SCT for formative purposes or medium-stakes course evaluations. Further avenues for research include examining the impact of various aspects of SCT construction and implementation on issues such as educational impact, correlations with other assessments, and the validity of pass/fail decisions, particularly for high-stakes examinations.
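The traditional SCT scoring method referred to above is commonly described as aggregate scoring: an examinee's credit for an item is the number of panellists who chose the same Likert response, divided by the number who chose the modal response. The sketch below illustrates that idea, together with a simple screen for moderate variability among panellists' responses; the function names and the variability thresholds are illustrative assumptions, not values taken from the review.

```python
from collections import Counter

def sct_item_credit(panel_responses, examinee_response):
    """Aggregate scoring for one SCT item: credit equals the count of
    panellists who chose the examinee's response divided by the count
    who chose the modal (most frequent) response. Modal response -> 1.0."""
    counts = Counter(panel_responses)
    modal_count = max(counts.values())
    return counts.get(examinee_response, 0) / modal_count

def moderate_variability(panel_responses, min_distinct=2, max_modal_share=0.9):
    """Illustrative item screen (thresholds are assumptions): flag items
    where the panel is unanimous (no discrimination) or nearly so."""
    counts = Counter(panel_responses)
    modal_share = max(counts.values()) / len(panel_responses)
    return len(counts) >= min_distinct and modal_share <= max_modal_share

# Example: a 15-member panel rating one item on a -2..+2 Likert scale.
panel = [-1, 0, 0, 0, 1, 1, 0, 0, -1, 0, 1, 0, 0, 1, 0]
print(sct_item_credit(panel, 0))    # modal response -> full credit 1.0
print(sct_item_credit(panel, 1))    # 4 of 9 modal votes -> 0.444...
print(moderate_variability(panel))  # True: three distinct responses
```

Summing such item credits over roughly 100 items, after discarding poorly performing ones at item analysis, yields the total test score.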