1988
DOI: 10.1111/j.1745-3984.1988.tb00290.x

A Practitioner's Guide to Computation and Interpretation of Reliability Indices for Mastery Tests

Abstract: From the perspective of teachers and test makers at the district or state level, current methods for obtaining reliability indices for mastery tests like the agreement coefficient and kappa coefficient are quite laborious. For example, some methods require two test administrations, whereas single-administration approaches involve complex statistical procedures and require access to appropriate computer software. The present paper offers practitioners tables from which agreement and kappa coefficients can be re…
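The two indices the abstract names can be illustrated with a short sketch. Assuming the two-administration approach (each examinee classified as master/nonmaster on two parallel forms), the agreement coefficient is the proportion of identical classifications, and kappa corrects that proportion for chance agreement. The data below are hypothetical.

```python
# Sketch: agreement coefficient (p0) and kappa coefficient for mastery
# classifications from two test administrations. Decisions are hypothetical.

def agreement_and_kappa(decisions1, decisions2):
    """Return (p0, kappa) for two parallel sets of master (1) / nonmaster (0) calls."""
    n = len(decisions1)
    # Observed proportion of identical classifications across the two forms
    p0 = sum(a == b for a, b in zip(decisions1, decisions2)) / n
    # Chance agreement from the marginal mastery rates of each administration
    p1 = sum(decisions1) / n
    p2 = sum(decisions2) / n
    pc = p1 * p2 + (1 - p1) * (1 - p2)
    # Kappa: agreement beyond chance, scaled to the maximum possible
    kappa = (p0 - pc) / (1 - pc)
    return p0, kappa

# Hypothetical decisions for 10 examinees on two parallel forms
test1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
test2 = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
p0, kappa = agreement_and_kappa(test1, test2)
```

Here nine of the ten classifications agree (p0 = 0.9), and kappa discounts the agreement expected from the marginal pass rates alone.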


Cited by 76 publications (83 citation statements)
References 5 publications
“…Several indices have been proposed for estimating mastery reliability and can be found elsewhere.¹¹ At the individual test score level, we are often more interested in computing an expected measure of error. The standard error of measurement (SEM) provides such an estimate and is given by:…”
Section: Classical Test Theory Reliability
confidence: 99%
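The formula itself is cut off in the excerpt above, but the standard classical-test-theory SEM is the observed-score standard deviation scaled by the square root of one minus the reliability. A minimal sketch under that assumption, with illustrative numbers:

```python
import math

def standard_error_of_measurement(sd, reliability):
    """Classical SEM: observed-score SD times sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

# E.g. a hypothetical test with score SD = 8 and reliability 0.91
sem = standard_error_of_measurement(8.0, 0.91)  # 8 * sqrt(0.09), about 2.4
```

A higher reliability shrinks the SEM toward zero, which is why the SEM is the natural per-examinee companion to a test-level reliability coefficient.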
“…The coefficient α for each of the three examinations exceeded 0.90, which meets testing industry standards.⁴ This value indicates that the variability in scores is largely due to differences in the true abilities of the candidates. Pass/fail decision consistency for 2011, 2009, and 2007 was 0.88, 0.89, and 0.89, respectively, which indicates that approximately 90% of the candidates who took any one of the three examinations would receive the same pass/fail decision if retested with an equivalent examination.…”
Section: Examination Methodology
confidence: 99%
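The coefficient α cited in this statement is Cronbach's α, computed from item-level data in a single administration. A sketch of the standard computation, using a small hypothetical set of dichotomously scored items:

```python
# Sketch: Cronbach's alpha from item-level scores (hypothetical data).

def cronbach_alpha(item_scores):
    """item_scores: one list per item, each holding per-examinee scores."""
    k = len(item_scores)           # number of items
    n = len(item_scores[0])        # number of examinees

    def pvar(xs):
        # Population variance (divide by n, not n - 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score per examinee, summed across items
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    item_var_sum = sum(pvar(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var_sum / pvar(totals))

# Three hypothetical 0/1 items answered by four examinees
items = [[1, 1, 0, 0],
         [1, 1, 0, 0],
         [1, 0, 0, 0]]
alpha = cronbach_alpha(items)
```

When items covary strongly (examinees who pass one tend to pass the others), the total-score variance dominates the summed item variances and α approaches 1, the pattern behind the >0.90 values reported above.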
“…This index ranges between 0 and 1 and evaluates the probability of making the same decision if the test were repeated. Each program has its own cut-off score, but for this analysis the top 108 scores (the total number of places in this admission round) were taken to assess the agreement between the decisions from the test with three response options and the test with four, using Subkoviak's tables (17).…”
Section: Analysis According to Classical Measurement Theory
unclassified