2012
DOI: 10.3928/01484834-20111130-03
Assessing the Reliability, Validity, and Use of the Lasater Clinical Judgment Rubric: Three Approaches

Abstract: The purpose of this article is to summarize the methods and findings from three different approaches examining the reliability and validity of data from the Lasater Clinical Judgment Rubric (LCJR) using human patient simulation. The first study, by Adamson, assessed the interrater reliability of data produced using the LCJR using intraclass correlation (2,1). Interrater reliability was calculated to be 0.889. The second study, by Gubrud-Howe, used the percent agreement strategy for assessing interrater reliabi…

Cited by 108 publications (47 citation statements). References 22 publications.
“…These reports contain insufficient details of sample, methods, and limitations. The most recent academic publication of the LCJR's reliability includes the results of three studies presented in one article: Adamson, who had the largest sample size used intra-class correlation (2,1) with a reported inter-rater reliability of 0.889; Gubrud used the percent agreement strategy with a reported range of inter-rater reliability of 92-96%; Sideras used level of agreement with a reported range of inter-rater reliability of 57-100% (Adamson et al, 2012).…”
Section: Reliability
confidence: 98%
“…Nicholson et al (2009, 2013) use the Rasch model to explore psychometric properties of a rubric, and explore inter-rater reliability with CCI and internal consistency with Cronbach's Alpha. Lasater (2007) develops a rubric using a qualitative-quantitative-qualitative design, and other authors have since studied its psychometric properties (Adamson et al, 2012; Shin et al, 2015). Other rubrics have been developed by consensus, and this might affect their content validity (Torres Manrique et al, 2012).…”
Section: Accepted Manuscript
confidence: 99%
“…Lasater (2007) reported a continuation of "predictive validity studies formalizing the correlation between the simulation laboratory and clinical setting, and studies of interrater reliability" (p. 503). Adamson et al (2014) have also established the reliability and validity of the LCJR with simulation using standardized human patients (Adamson et al, 2012).…”
Section: Methods
confidence: 99%