2018
DOI: 10.1080/08832323.2018.1503584
Why peer evaluations in student teams don’t tell us what we think they do

Cited by 8 publications (10 citation statements)
References 66 publications
“…Finally, we found moderate associations (average r = 0.35) between target effects and self-ratings, which is stronger than that reported in previous studies of student teams (Loignon et al., 2017; Mayfield and Tombaugh, 2019). As with peer-peer consensus, peer-self agreement can be interpreted as evidence for the accuracy of both self- and peer-ratings (i.e., presumably the ratings of different raters converge because they are all influenced by a target's true skill level).…”
Section: Discussion (contrasting)
Confidence: 91%
“…Greguras et al. (2001) found on average 26% of the variance in peer ratings was target variance and 29% was perceiver variance. Mayfield and Tombaugh (2019) found on average 22% of the variance in peer ratings was target variance and 36% was perceiver variance. Schmidt et al. (2021) found on average 28% was target variance and 35% was perceiver variance.…”
Section: Discussion (mentioning)
Confidence: 92%
“…A conscious decision was made to exclude peer evaluations from the list, as this would lead to the undesirable situation of two students de facto grading each other—a scenario that is especially unattractive since research on peer evaluations indicates that measuring performance in this way can be quite imprecise (Mayfield & Tombaugh, 2019). Note that in all included methods, the ultimate decision of any consequence for a suspected free-rider remained with the course coordinator (Cook, 1981).…”
Section: Methods (mentioning)
Confidence: 99%