2010
DOI: 10.1080/02602930902795927
The impact of electronic media on faculty evaluation

Cited by 15 publications (6 citation statements)
References 46 publications
“…Gamliel and Davidovitz (2005) report that standard deviations are higher for online evaluations, though report similar distributions of scores. Barkhi and Williams (2010) report that, though means for online evaluations are statistically significantly lower than for paper evaluations, when these data are controlled for course and instructor, no statistically significant difference is found between the means. report that online means were "marginally lower" (p. 50) (1.6% lower) than those for paper evaluations, but that the difference is "of little practical significance" (p. 50).…”
Section: Claimed Advantages
confidence: 58%
“…Barkhi and Williams (2010) report that online evaluations produce more extreme evaluations than their paper counterparts, that is, the extremes of scales are used and harsh written responses are received. Gamliel and Davidovitz (2005) report that standard deviations are higher for online evaluations, though report similar distributions of scores.…”
Section: Claimed Advantages
confidence: 99%
“…These results support the concurrent validity of both types of instruments, although Venette, Sellnow, and McIntyre (2010) reported that student comments in electronic evaluations are more detailed than are those in paper-and-pencil questionnaires. At the aggregate level, Barkhi and Williams (2010) noted that electronic SET scores are lower than are those obtained with paper-and-pencil surveys. These differences disappear, however, when controlling for course and instructor.…”
Section: Criterion-related Validity
confidence: 96%
“…This is all the more important since online evaluations of teaching in higher education have become increasingly common since the beginning of the new millennium (Crews and Curtis 2011; Morrison 2011; Venette et al. 2010; Treischl and Wolbring 2017). The method's practicality, feasibility, flexibility, time- and cost-effectiveness when dealing with large samples and a large amount of data, and its potential to provide real-time feedback make it an attractive option that is increasingly replacing paper-and-pencil course evaluations (Barkhi and Williams 2010; Dommeyer et al. 2004; Hessius and Johansson 2015; Layne et al. 1999; Nulty 2008; Risquez et al. 2015). As nowadays mobile devices, such as smartphones or tablet computers, are more and more widely used on campuses all over the world, a new line of discussion moves toward the practicability and feasibility of 'mobile' course evaluations (Champagne 2013; Hessius and Johansson 2015).…”
Section: Introduction
confidence: 99%