Most of the recent literature on the evaluation of instructional effectiveness has emphasized the need to develop comprehensive systems. However, a careful scrutiny of actual working systems of instructional evaluation reveals that student ratings of the instructor and instruction remain the only component that is regularly obtained and used. Therefore, instructor/instructional evaluation has become synonymous with student rating/evaluation for those being judged. In an attempt to impugn the value of such ratings for faculty self-improvement and/or promotion and tenure purposes, faculty and administrators have generated and perpetuated several myths concerning student ratings of instructors and instruction. In order to address 15 of the most common myths regarding student ratings of instructors and instruction, research spanning a 62-year period will be cited and summarized below. Myth 1: Students cannot make consistent judgments about the instructor and instruction because of their immaturity, lack of experience, and capriciousness.
An extensive review of the research concerning the effect of different variables on student ratings is presented. A study is then reported comparing the effects of different sets of instructions on student evaluations of the course and instructor. The results indicated that students who were informed that the results of their ratings would be used for administrative decisions rated the course and instructor more favorably on all aspects than students who were informed that the results of their ratings would only be used by the instructor. In the mad rush to make courses "relevant" and meet new demands of accountability, colleges and universities have proposed many methods of evaluating the effectiveness of instruction. Such proposals generally indicate that many elements of the instructional setting need to be evaluated by several different audiences. Unfortunately, most proposals that are operationalized rest solely on the use of student ratings of instructors and informal colleague opinions. That students are able to provide reliable and valid evaluations of instructional quality has come to be recognized (Aleamoni, 1978; Costin et al., 1971). Much of the research on student ratings of instructors has been concerned with the effect of different variables on these ratings. Due in part to the use of different course evaluation forms and to the use of differing research methodologies, the results of these investigations are often discrepant. Some of the variables which have been investigated include (a) reliability and validity of student ratings, (b) reliability and validity of student rating instruments, (c) class size, (d) sex of the student and sex of the instructor, (e)
The present study was designed to assess the effects on faculty performance of a combination of feedback and personal consultations using college student evaluations. Student evaluation feedback and personal consultations were conducted at least a semester before any follow-up data were gathered. The results indicate that providing computerized results of college student evaluations along with individual faculty consulting sessions helped the instructors to significantly improve their student ratings on two instructional dimensions.
The relationship of sample size to number of variables in the use of factor analysis has been treated by many investigators. In attempting to explore what the minimum sample size should be, none of these investigators pointed out the constraints imposed on the dimensionality of the variables by using a sample size smaller than the number of variables. A review of studies in this area is made as well as suggestions for resolution of the problem.
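The constraint described above follows from a rank argument: with fewer observations than variables, the sample covariance matrix cannot have full rank, which caps the number of extractable factors. A minimal sketch of this (using hypothetical simulated data, not the studies reviewed here) is:

```python
import numpy as np

# Hypothetical data: n = 5 observations on p = 10 variables (n < p).
rng = np.random.default_rng(0)
n, p = 5, 10
X = rng.standard_normal((n, p))

# The sample covariance matrix is p x p, yet its rank is at most n - 1,
# so no more than n - 1 dimensions (factors) can be recovered from it.
cov = np.cov(X, rowvar=False)
rank = np.linalg.matrix_rank(cov)
print(cov.shape, rank)  # (10, 10) with rank capped at 4
```

The point of the sketch is only that dimensionality is bounded by the data, not by the number of variables, regardless of which factor-extraction method is applied.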