We analyzed over 100,000 student evaluations of instruction collected over four years in the college of business at a major public university. We found that the original instrument, validated about 20 years ago, is still valid: factor analysis showed that the six underlying dimensions used in the instrument remained largely intact. We also found that the relative importance of those six factors in the overall assessment of instruction has changed over the past two decades, reflecting the expectations of the current millennial generation of students. The results were consistent, with minor differences, across the four subgroups studied (Undergraduate Core, Undergraduate Non-Core, Graduate Core, and Graduate Non-Core classes). Student Motivation (the instructor's ability to motivate students) and Grading/Assignments (fairness and objectivity of grading practices) have superseded presentation ability in relative importance as indicators of overall teaching effectiveness. Our study has implications for teachers regarding which areas to focus on when improving their teaching practices.
We analyze Student Evaluations of Instruction (SEIs) from about 6,000 sections over four years, representing over 100,000 students at the college of business at a large public university, to study the impact of noninstructional factors on student ratings. We examine administrative factors such as semester, time of day, and location, as well as instructor attributes such as gender and rank. The combined impact of all the noninstructional factors studied is statistically significant. Our study has practical implications for administrators who use SEIs to evaluate faculty performance: SEI scores reflect some inherent biases due to noninstructional factors, and appropriate norming procedures can compensate for such biases, ensuring fair evaluations.
Much of the e-education literature suggests that there is no significant difference in aggregate student learning outcomes between online and face-to-face instruction. In this study, we develop a model that forecasts the grade each individual student would most likely have earned in the alternate class setting. Students for whom the difference between the actual grade received in one class format (for example, online) and the forecasted grade in the other setting (for example, face-to-face) is one full letter grade or more are called “jumpers.” Our findings indicate that jumpers are numerous, suggesting that although no significant difference may exist between instruction settings at the aggregate level, at the individual level the choice between settings matters. These results have important implications for the no-significant-difference literature and strongly support the need for refined course-setting advisement for students.