1997
DOI: 10.1097/00001888-199710001-00028

Feasibility and psychometric properties of using peers, consulting physicians, co-workers, and patients to assess physicians

Cited by 78 publications (58 citation statements). References 0 publications.
“…The amount of rater experience did not change the results. Similar results have been reported by Kreiter and colleagues 16 (more than 8 ratings needed), Kwolek et al 31 (7-8 ratings), Ramsey et al 32 (11 ratings), Violato et al 33 (10 ratings), and Kroboth et al 34 (6-10 ratings). All reports agree that somewhere between 7 and 11 ratings are necessary to achieve a generalizable global estimate of competence when raters are basing ratings on a nonsystematic sample of observations.…”
Section: Accuracy in Estimating Competence (Estimating From a Discret…) | Citation type: supporting
confidence: 87%
“…33,35,54-58 These results suggest that either the individual competencies that make up clinical competence are highly correlated (i.e., individuals who perform one competency well are likely to perform well in the other competency areas); or that individual competencies are not highly correlated, but raters perceive them as such and rate them accordingly. Nathan and Lord 49 summarized their research on performance assessment by stating:…”
Section: Data Recall | Citation type: mentioning
confidence: 99%
“…12-17 Its advantage lies in the synthesis of multiple perspectives to achieve a more stable, global rating. 18-20 An additional property is the ability to incorporate learner self-assessment in what is known as gap analysis. 16,18 Gap analysis examines the difference between the combined scores of “expert” faculty raters and the learner's self-score to obtain a measure of that individual's self-appraisal.…”
Section: Discussion | Citation type: mentioning
confidence: 99%
“…Thus, uptake of co-worker data for professional development purposes is predicated on reliable instrumentation. 15 The PAR instruments were psychometrically assessed with an examination of evidence for validity 16-19 and reliability 20-25 when they were first developed. While aspects of reliability have been reexamined for physician groups that have participated in PAR on more than one occasion, 12 there has not been a comprehensive examination of reliability for the co-worker instruments across all PAR specialties or specifically of reliability, a key component of validity.…”
Section: Co-worker Familiarity and Physician Multisource Feedback | Citation type: mentioning
confidence: 99%