This study examined between-rater subjectivity in the scoring of quantitative and qualitative cloze-based assessments of reading comprehension. Fifty students were randomly selected from the 456 students used in the standardization of the Cloze Reading Inventory (CRI). Because each student took four different forms of the CRI, there were 200 passages, each rated by three of seven raters, yielding 600 ratings for the interrater reliability analysis. Each of the seven raters had at least five years of teaching experience at the elementary or secondary level and was a graduate student in a reading specialist program. The intraclass correlation method was used to derive reliability coefficients from the raters' mean score ratings for each passage, each of the four interpretive values, and each of grades 3, 5, 7, 9, and 11. The repeated measures analysis of variance indicated that interrater reliabilities were consistently high across passages, interpretive values, and grade levels.
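The intraclass correlation for mean ratings can be sketched as follows. This is a minimal illustration, not the study's actual analysis: it assumes a two-way random-effects, average-measures model (Shrout and Fleiss's ICC(2,k)) computed from the mean squares of a repeated measures ANOVA, and the `scores` data below are hypothetical.

```python
import numpy as np

def icc_2k(ratings):
    """Average-measures intraclass correlation, ICC(2,k): the reliability
    of the mean of k raters under a two-way random-effects model.
    `ratings` is an (n targets x k raters) array of scores."""
    X = np.asarray(ratings, dtype=float)
    n, k = X.shape
    grand = X.mean()
    # Sums of squares for targets (rows), raters (columns), and residual
    ss_targets = k * ((X.mean(axis=1) - grand) ** 2).sum()
    ss_raters = n * ((X.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((X - grand) ** 2).sum()
    ss_error = ss_total - ss_targets - ss_raters
    # Mean squares from the repeated measures ANOVA table
    ms_targets = ss_targets / (n - 1)
    ms_raters = ss_raters / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_targets - ms_error) / (ms_targets + (ms_raters - ms_error) / n)

# Five hypothetical passages, each scored by three raters on a 0-10 scale
scores = [[9, 10, 9],
          [5, 6, 5],
          [8, 8, 7],
          [2, 3, 2],
          [7, 7, 6]]
print(round(icc_2k(scores), 3))  # → 0.984
```

Raters who rank the passages consistently, even with small absolute disagreements, produce a coefficient near 1, which is the pattern the study reports across passages, interpretive values, and grade levels.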