2010
DOI: 10.1016/j.nedt.2009.06.014

Developing and examining an Objective Structured Clinical Examination

Cited by 37 publications (29 citation statements)
References 7 publications
“…Marjan et al (2002) in their study showed that there was low reproducibility of checklist scores across tasks. Jones et al (2010) make the point that it is important for the content validity of the OSCE station that the marking criteria relate only to the skill being assessed, in order to identify those students who can or cannot perform a skill such as blood pressure measurement. They suggest that, as the student progresses, methods of identifying discrete skills must be included within the overall care of the patient.…”
Section: Discussion
confidence: 99%
“…In Rushforth's (2007) review, a key aspect of reliability explored in studies is the accuracy of judgements made by examiners, which frequently rely on single examiners. Jones et al (2010) suggest that it is important to establish whether there is a correlation between global ratings and the mark achieved. In our review, the issue of having two markers at each station was raised by both the internal and external midwife experts as a positive and was perceived to help address inter-assessor reliability.…”
Section: Discussion
confidence: 99%
“…In addition to the previous advantages already outlined, Ulfvarson and Oxelmark [22] found that the OSCE can also be used for examining learning outcomes, especially those comprising practical skills such as medical techniques and interpretation of results. It has been recognized as a reliable and valid method of assessing clinical skills competency [16, 39-41], and Carraccio and Englander [42] have suggested that the OSCE has become a key standard for assessing clinical competence. Some criticisms of the OSCE have, however, been identified.…”
Section: Discussion
confidence: 99%
“…In order to enhance assessors' understanding, involvement and commitment to the OSCA, it may be helpful to include them in creating and piloting the assessment criteria used for examination (Schoonheim-Klein et al 2005). Further, to ensure objective interpretation of the assessment criteria, it may be helpful to prepare the assessor with practice guidelines and a briefing on the day of the examination, thus promoting reliability (Jones et al 2010, Major 2005). Rushforth (2007) and Major (2005) also advocate practice sessions using assessment tools in order to uncover and work through any discrepancies or questions in relation to the assessment criteria.…”
Section: Discussion
confidence: 99%
“…However, the evaluation of student skills and clinical practice relies on both the grading criteria and the professional judgment of the assessor (Parker 2009). The challenge in administering OSCAs is ensuring that the marking criteria and examination protocols are robust and transparent (Jones et al 2010). Moreover, individual assessors may grade students subjectively rather than against the assessment criteria, and reliance on individuals to assess competence can lead to observer bias, which can create inconsistency among assessors (Bourbonnais et al 2008; Norman et al 2002; Parker 2009).…”
Section: Introduction
confidence: 99%