The aim of this study was to develop a criterion of graduate school success as an alternative to first‐year average. More specifically, faculty rating scales of students' analytical abilities were developed as a potential criterion against which to validate both the current Graduate Record Examinations (GRE) analytical measure and future modifications of it.
The rating instrument was based on previous research (Powers & Enright, 1986, 1987), which identified a number of independent dimensions underlying faculty perceptions of the importance of a wide variety of reasoning skills. The instrument included six separate scales for faculty to rate individual students with respect to their skills in analyzing arguments, drawing inferences, defining problems, reasoning inductively, and generating alternatives, as well as their overall analytical style.
The rating scales were completed by faculty members in a sample of 24 graduate departments representing six disciplines. Three important results have implications for the use of faculty ratings as a criterion of success. First, when rating individual students, faculty raters were largely unable to distinguish among the six skills; ratings on the separate scales were very highly intercorrelated. This suggests that the rating instrument could be simplified for future use.
Second, although the ratings and first-year grades were highly correlated, indicating that both criteria reflect success in graduate school, there was also evidence that they measure somewhat different aspects of that success. Each of the three GRE General Test measures (verbal, quantitative, and analytical) was, on average, more highly correlated with the ratings than with first-year averages. Undergraduate grades, on the other hand, correlated better with first-year grades than with the ratings.
Finally, results were mixed with respect to the validity of faculty ratings of students' analytical abilities. When the three GRE measures were ranked by their predictive effectiveness within each department, the analytical measure was significantly more often the best or second-best predictor of faculty ratings than of first-year average, whereas the verbal and quantitative measures were the best predictors about equally often for ratings and for grades. This suggests that the ratings may reflect analytical ability more than verbal or quantitative ability. However, the verbal measure was, on average, more highly correlated with faculty ratings of students' analytical skills than was the analytical measure, suggesting that the ratings may have been influenced by students' verbal reasoning skills. This failure to find unequivocal evidence of discriminant validity for the ratings may reflect problems with the rating scales, with the way in which faculty rated students, or with the discriminant validity of the analytical measure itself. A recommendation was made to continue research on the development of these scales.