2011
DOI: 10.7861/clinmedicine.11-1-23

Content validity of a clinical problem solving test for use in recruitment to the acute specialties

Abstract: Clinical problem solving tests (CPSTs) have been shown to be reliable and valid for recruitment to general practice (GP) training programmes. This article presents the results from a Department of Health-funded pilot into the use of a CPST designed for recruitment to the acute specialties (AS). The pilot paper consisted of 99 items from the validated GP question bank and 40 new items aimed specifically at topics of relevance to AS training. The CPST successfully differentiated between applicants. The overall …

Cited by 5 publications (7 citation statements)
References 5 publications
“…Concerns with the construct validity of CPST have been raised. There was no firm evidence that the CPST validly tests problem-solving skills rather than knowledge (Patterson, Baron, et al 2009; Crossingham et al 2011). In pilot testing for selection processes into UK GP training, the CPST correlated with the SJT varying in the range of r = 0.39 to r = 0.53 (Patterson, Baron, et al 2009; Patterson, Lievens, et al 2013).…”
Section: Selection Framework Based On Well-defined Criteria With Mul… (mentioning)
confidence: 96%
“…patient outcomes)
Reliability: inter-rater reliability, internal consistency
Sensitivity: in relation to levels of performance (i.e. distinguishing poor from good performers)
Transparency: people assessed understand the performance criteria against which they are being rated; availability of reliability and validity data
Usability: simple framework, easy to train, easy to understand, easy to observe, domain-appropriate language, sensitive to rater workload
Can provide a focus for training goals and needs
Baselines for performance criteria are available and can be used appropriately by raters
Minimal overlap between assessment components
Evidence-based selection, using appropriately validated tasks and the concept of assessment/selection centres, is feasible across specialities, including acute care,104 surgery,105 106 and anaesthesia.107 108 Gale and colleagues,107 specifically, have shown correlations between performance within the assessment centre setting and job performance over the first year of the candidate's clinical appointment.…”
Section: Box 1 Characteristics Of A Good Non-technical/Team Assessmen… (mentioning)
confidence: 99%
“…collaboration or empathy) are actually assessed in medical school selection procedures, and whether this is in line with what was intended from their outcome-based focus (Christian et al 2010; Wilkinson and Wilkinson 2016). This means that selection may be considered a sort of 'black box' (Kreiter 2017; Kulasegaram 2017; Lievens et al 2008), a paradoxical situation in which selection tools may predict outcomes but which constructs are actually doing the predicting is uncertain (Cleland et al 2014; Crossingham et al 2011; Tiller et al 2013). It is essential to know more about what is actually being measured (i.e.…”
Section: Introduction (mentioning)
confidence: 99%