A professional problem that has been rather consistently ignored by both psychologists and educators concerns methods of determining the reliability of letter grades in single college courses. The stability of quality point averages and of other composite indices of academic achievement over various time periods has been investigated; the most recent study was reported by Clark (2). However, studies of the reliability of the single course letter grades that make up such achievement measures have yet to be reported. Such investigations are vitally needed for two reasons. First, we need to know, as educators, how reliable the achievement ratings we give our students are, especially since these ratings become part of the student's official academic record and determine, in part at least, his future educational, employment, and military opportunities. Grading procedures found by such research to lack reliability can then be modified to achieve greater consistency in academic assessment. A second reason is the widespread use in educational research of academic grades as predictor or criterion variables. All too often the disappointed psychologist who fails to find a high correlation between his pet freshman entrance examination and later college grades glibly attributes the lack of relationship to the known unreliability of letter grades. The lack of evidence on grade reliability noted above may be saving the professional reputations of many widely used freshman entrance tests. In describing a "Grade" factor that has appeared in several factor analytic studies including academic marks as variables, French (6) suggests that this factor may be an artifact attributable to differences in the reliability of grading between different academic courses.
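The attenuation argument implicit in the second reason can be made explicit with Spearman's classical correction formula, a standard psychometric result not derived in the source; the numerical values below are purely illustrative:

```latex
% Observed validity is attenuated by the unreliability of both measures:
%   r_{xy}        observed correlation between test x and grades y
%   r_{x'y'}      correlation between the (error-free) true scores
%   r_{xx}, r_{yy} reliability coefficients of x and y
r_{xy} \;=\; r_{x'y'}\,\sqrt{r_{xx}\,r_{yy}}

% Illustrative case: a true validity of .70, with test reliability .90
% and grade reliability .60, would be observed as
r_{xy} \;=\; 0.70\,\sqrt{(0.90)(0.60)} \;\approx\; 0.51
```

Without evidence on the actual value of the grade reliability coefficient, one cannot tell whether a modest observed validity reflects a weak test or merely attenuation of this kind.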