2018
DOI: 10.1080/02602938.2018.1506909
Beyond Average: Contemporary statistical techniques for analysing student evaluations of teaching

Abstract: Student Evaluations of Teaching (SETs) have been used to evaluate Higher Education teaching performance for decades. Reporting SET results often involves the extraction of an average for some set of course metrics, which facilitates the comparison of teaching teams across different organisational units. Here, we draw attention to ongoing problems with the naive application of this approach. Firstly, a specific average value may arise from data that demonstrates very different patterns of student satisfaction. …
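The abstract's first point can be illustrated with a small sketch (the data below is hypothetical, not from the paper): two courses whose 5-point Likert ratings share exactly the same average, yet reflect very different patterns of student satisfaction.

```python
import statistics

# Hypothetical 5-point Likert responses for two courses (illustrative only).
# Course A: most students moderately satisfied, clustered around 3.
course_a = [3, 3, 3, 3, 3, 2, 4, 2, 4, 3]
# Course B: polarised cohort, half very dissatisfied, half very satisfied.
course_b = [1, 1, 1, 1, 1, 5, 5, 5, 5, 5]

# Both courses produce an identical average rating of 3.0 ...
mean_a = statistics.mean(course_a)
mean_b = statistics.mean(course_b)

# ... but the spread of responses separates them immediately.
sd_a = statistics.stdev(course_a)
sd_b = statistics.stdev(course_b)

print(mean_a, mean_b)  # 3 3
```

Reporting only the mean would rank these two courses as equivalent, which is precisely the failure mode the abstract warns against.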

Cited by 9 publications (13 citation statements)
References 44 publications
“…One aspect to keep in mind is that these five teaching performance criteria refer to the didactic competencies that teachers display in their interaction with students in class or in practice, and these can be evaluated differently according to the stage of advancement in the professional training of the student being evaluated. The data from this study regarding differences by stage or academic cycle coincide with findings reported in studies on student evaluation, teaching and teacher performance in the context of higher education (Marsh and Hocevar, 1984; Kamran et al., 2012; Kalender, 2015; Müller et al., 2017; Kitto et al., 2019; Mocanu et al., 2021; Pérez-Villalobos et al., 2021). Furthermore, these differences in student perception may be related to student expectations and prior interest (van de Grift et al., 2016; Feistauer and Richter, 2018).…”
Section: Differences Among Groups (supporting)
Confidence: 82%
“…A second hypothesis was the existence of significant differences in the evaluation of the didactic performance of the teacher, according to sex, age groups, and academic level (stage) of the psychology students. To date, little is known about possible differences according to the age of the student body in the appraisal of teaching in the context of university education, but differences in students' appraisal of teaching according to their level of advancement in their studies have been reported (Marsh and Hocevar, 1984; Kamran et al., 2012; Kalender, 2015; Müller et al., 2017; Kitto et al., 2019; Mocanu et al., 2021; Pérez-Villalobos et al., 2021). Similarly, differences have been reported according to the gender of the student body, with respect to the students' assessment of teaching and the performance of their teachers, in the context of higher education (Boring, 2015; Boring et al., 2016; Potvin and Hazari, 2016; Eouanzoui and Jones, 2017; Heffernan, 2021; Kreitzer and Sweet-Cushman, 2021; Valencia, 2021).…”
Section: Introduction (mentioning)
Confidence: 99%
“…Artificial intelligence and data analytics (AIDA) is increasingly entering education: standardised tests, such as the Programme for International Student Assessment (PISA, http://www.oecd.org/pisa), evaluate student performance around the world (Sellar, Thompson & Rutkowski, ); academics are appointed and promoted based upon satisfaction scores sourced from students (Kitto, Williams, & Alderman, ); machine learning predicts which students might be at risk of failure based upon various datasets (Gašević, Dawson, Rogers, & Gasevic, ); text analysis can give students real-time feedback on their writing (Gibson et al., ; Shibani, Knight, Buckingham Shum, & Ryan, ); intelligent tutoring and adaptive learning systems are leveraged to personalise content delivery (Feldstein & Hill, ); and companies and consortia such as Google, Burning Glass, Salesforce, IMS Global and ADL are marketing data, standards and suites of new tools to institutions (Kitto, O'Hara, et al., ).…”
Section: Introduction (mentioning)
Confidence: 99%
“…Some consider relationships between satisfaction and student marks (or 'easiness', Felton et al., 2004) as evidence of bias; however, we consider this relationship theoretically sound, as higher student satisfaction should also enable higher participation and engagement, resulting in better marks (Richardson et al., 2012). Comprising Likert-type measures, SET-like surveys may also require complex statistical treatment (Kitto et al., 2019).…”
Section: Student Satisfaction (mentioning)
Confidence: 99%
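The remark about Likert-type measures needing careful statistical treatment can be sketched briefly (hypothetical responses, not data from any of the cited surveys): because Likert responses are ordinal, summaries such as the full frequency distribution and the median are often more defensible than a lone arithmetic mean.

```python
from collections import Counter
from statistics import median

# Hypothetical 5-point Likert responses (illustrative only).
responses = [5, 4, 4, 5, 2, 1, 4, 5, 3, 5]

# Ordinal-appropriate summaries: how many students chose each rating,
# plus the median response, rather than a single mean score.
freq = Counter(responses)
print(sorted(freq.items()))  # [(1, 1), (2, 1), (3, 1), (4, 3), (5, 4)]
print(median(responses))     # 4
```

The frequency table preserves the shape of the response distribution, which a mean discards; this is the kind of richer reporting the cited paper argues for.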