2001
DOI: 10.1207/s15327043hup1404_2
The Effects of Task Information Training and Frame-of-Reference Training With Situational Constraints on Rating Accuracy

Cited by 12 publications (12 citation statements)
References 25 publications
“…Moreover, task performance assessment revealed that the scores of both the group that received Training_4_Programmes and the group trained in FOR and PDimT were closer to those issued by the group of experts than the control group and other experimental groups. This result concurs with that of previous studies, where participants trained in the creation of a common frame-of-reference, either independently or combined with other types of training, gave more accurate appraisals than those who had received no training or minimal training [46,33,34,39,41,14,15,29,47,35,48,36,37,49,27,25,50,18,32,51,52,28,16,17,53,54]. The results of this study, unlike those obtained in other studies [23,7], show that training in Rater Error included in a combined programme does not affect appraisal accuracy.…”
Section: Discussion (supporting)
confidence: 92%
“…In testing the fourth hypothesis (1.4 and 2.4), we attempted to determine whether task and citizenship performance appraisal varies according to the type of training received. Several studies have analyzed the effectiveness of the various types of rater training along these lines [56,14,57,35,27,50,32,13,52,16,53,54,7]. Likewise, to evaluate the improvement in assessment accuracy, most studies [58,14,15,18,16,17,54,19] have used Cronbach’s indices [11] or Borman’s distance accuracy index [12].…”
Section: Discussion (mentioning)
confidence: 99%
“…We averaged over the 'other' manipulation. Also, some studies, such as Sulsky, Skarlicki, and Keown-Gerrard (2002) and Keown-Gerrard and Sulsky (2001), introduced a revised version of FOR training and compared this revised version to both a traditional FOR training and a control session. We ignored all revised versions of FOR training and only report results based on traditional FOR training.…”
Section: Meta-analytic Procedures (mentioning)
confidence: 99%
“…Historically, two rater training programs have received the most attention: rater error training (RET), which aims to teach trainees to recognize and avoid common distributional rating errors such as leniency, central tendency, and halo (Latham, Wexley, & Pursell, 1975), and frame-of-reference training (FORT), which provides trainees with a common conceptualization of performance as a basis for observing and evaluating ratee behavior (Bernardin & Buckley, 1981). However, in recent years, FORT has dominated rater training research because of its reliable success in enhancing rating accuracy (Gorman & Rentsch, 2009; Keown-Gerrard & Sulsky, 2001; Melchers, Lienhardt, von Aarburg, & Kleinmann, 2011; Roch et al., 2012; Schleicher, Day, Mayes, & Riggio, 2002; Uggerslev & Sulsky, 2008). For this reason, FORT has increasingly been used for rater training in many areas where subjective judgments are made, including competency modeling (Lievens & Sanchez, 2007), assessment centers (Goodstone & Lopez, 2001; Jackson, Atkins, Fletcher, & Stillman, 2005; Mulder, Jorgensen, Nel, & Miering, 2013; Schleicher et al., 2002), and interviews (Melchers et al., 2011; Mills, 1999).…”
Section: Introduction (mentioning)
confidence: 99%