1985
DOI: 10.1111/j.1745-3984.1985.tb01056.x

The Concurrent Validity of Standardized Achievement Tests by Content Area Using Teachers' Ratings as Criteria

Abstract: To assess the concurrent validity of standardized achievement tests using teachers' ratings (and rankings) of pupils' academic achievement as criteria, 42 teachers evaluated each of their students (n = 1,032) in each of five major curricular areas prior to the administration of a battery of standardized achievement tests. The teachers were directed to rate each student's proficiency disregarding attendance, attitude, deportment, and so on. Within-class correlation coefficients were computed to eliminate rater …
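
The within-class correlation mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' actual analysis; the DataFrame and column names (class_id, teacher_rating, test_score) are hypothetical. The idea shown is simply that centering both variables on their class means removes between-teacher differences (e.g., rating leniency or severity) before correlating.

```python
# Minimal sketch of a pooled within-class correlation, assuming a pandas
# DataFrame with hypothetical columns "class_id", "teacher_rating", and
# "test_score". Centering within each class removes between-rater mean
# differences, so the remaining correlation reflects within-class agreement.
import pandas as pd

def within_class_correlation(df: pd.DataFrame,
                             rating_col: str = "teacher_rating",
                             score_col: str = "test_score",
                             class_col: str = "class_id") -> float:
    """Correlate ratings and test scores after removing class (rater) means."""
    centered = df[[rating_col, score_col]] - df.groupby(class_col)[
        [rating_col, score_col]].transform("mean")
    return centered[rating_col].corr(centered[score_col])

# Example usage with toy, purely illustrative data:
data = pd.DataFrame({
    "class_id":       [1, 1, 1, 2, 2, 2],
    "teacher_rating": [3, 4, 5, 2, 3, 4],
    "test_score":     [48, 55, 61, 50, 57, 60],
})
print(within_class_correlation(data))
```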

Cited by 51 publications (31 citation statements)
References 5 publications
“…For example, teacher measures compared with standardised test results for reading have generally been positively and significantly correlated, ranging from 0.41 to 0.73 (Airasian et al, 1977; Luce & Hoge, 1978; Hopkins et al, 1985; Wright & Wiese, 1988). Sharpley & Edgar (1986) separated reading into two components, vocabulary and comprehension, finding correlations between teachers and student measures of 0.42-0.44 and 0.50-0.56 for vocabulary and comprehension, respectively.…”
mentioning
confidence: 98%
“…Sharpley & Edgar (1986) separated reading into two components, vocabulary and comprehension, finding correlations between teachers and student measures of 0.42-0.44 and 0.50-0.56 for vocabulary and comprehension, respectively. Such correlations have been interpreted as indicating the validity of teachers' judgements (Hoge & Coladarci, 1989; Hopkins et al, 1985; Wright & Wiese, 1988), although many have accounted for only small shared variance between teacher and test measures. However, Hoge & Coladarci (1989) suggested that this may be a consequence of indirect estimates of achievement by teachers in the form of ratings and rankings; asking teachers to estimate directly student performance scores for a particular test does appear to improve correlations to a range of 0.68-0.82 (Doherty & Conolly, 1985; Wright & Wiese, 1988; Freeman, 1993).…”
mentioning
confidence: 99%
“…The KR-20 index of reliability for the total test exceeds .80, indicating good internal consistency (Miller, 1992). Hopkins and Williams (1985) found the test to have significantly high concurrent validity by content area.…”
Section: Methods
mentioning
confidence: 94%
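
For context on the KR-20 index cited above: Kuder-Richardson Formula 20 is a standard internal-consistency estimate for tests with dichotomously scored items. The formula below is the standard textbook form, not something taken from the cited report:

$$
\mathrm{KR\text{-}20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^2}\right)
$$

where k is the number of items, p_i is the proportion of examinees answering item i correctly, q_i = 1 − p_i, and σ_X² is the variance of total test scores; values above .80 are conventionally read as good internal consistency.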
“…For indirect judgements, teachers do not know about the characteristics and tasks of the underlying achievement test and they have to rate students' overall achievement in a certain domain on a rating scale (e.g. Hopkins, George, & Williams, 1985). We will refer to that type of judgement as global judgements.…”
Section: Theoretical Framework
mentioning
confidence: 99%