2012
DOI: 10.1111/j.1540-4609.2011.00328.x
An Empirical Investigation of Student Evaluations of Instruction—The Relative Importance of Factors

Abstract: We analyzed over 100,000 student evaluations of instruction over four years in the college of business at a major public university. We found that the original instrument that was validated about 20 years ago is still valid, with factor analysis showing that the six underlying dimensions used in the instrument remained relatively intact. Also, we found that the relative importance of those six factors in the overall assessment of instruction changed over the past two decades, reflecting changes in the expectat…

Cited by 16 publications (27 citation statements)
References 31 publications
“…As Brightman (2005) points out, in order to effectively use SEIs for assessment, the instrument must first be valid. The validity of the instrument used at the College of Business of this large public university was established by Brightman et al. (1989), and the instrument was revalidated more recently by Nargundkar and Shrikhande (2012). Furthermore, the results of the SEIs should be appropriately normed to give faculty fair feedback.…”
Section: Discussion
confidence: 99%
“…Although SET validity and reliability have been frequently disputed, some authors state that these are valid tools to evaluate teaching (Grammatikopoulos, Linardakis, Gregoriadis, & Oikonomidis, 2014; Khong, 2014), and in some cases remain valid tools years after their initial implementation (Nargundkar & Shrikhande, 2012). Though assessing an instrument's validity is a continuous process, some researchers have indicated that SET have good overall reliability and validity with relatively few biases (Socha, 2013; Wright & Jenkins-Guarnieri, 2012).…”
Section: Strengths and Weaknesses of Evaluations
confidence: 99%
“…On the contrary, some other factors are not easily influenced and managed, such as student self-motivation, student learning style, required/elective courses ratio, and so forth. Rating teachers should be a valuable procedure for students as well, because it can lead to improvement of teaching quality, based on the stated opinions of the students (Marzano, 2012; Nargundkar & Shrikhande, 2012). The study of Taylor and Tyler (2012) strongly confirms the opinion that teachers develop skills and otherwise improve due to student evaluation.…”
Section: Introduction
confidence: 97%
“…Some previous studies have dealt with the issue of whether and to what extent the evaluation results truly reflect students' attitudes. According to most authors, teacher rating proved to be a good indicator of teaching effectiveness (e.g., Beran & Violato, 2005; Nargundkar & Shrikhande, 2012; Wiers-Jenssen, Stensaker, & Grogaard, 2003). Beran and Violato (2005) also found that teacher rating is, to a lesser extent, biased by some factors that are not related to the teachers themselves, such as students' grade expectations, attendance, and types of courses being evaluated.…”
Section: Introduction
confidence: 99%