Outcomes from the Center for the Advancement of Pharmacy Education (CAPE) are intended to represent the terminal knowledge, skills, and attitudes pharmacy students should possess, and they have guided the delivery of pharmacy education for more than two decades. Advanced pharmacy practice experiences (APPEs) are the endpoint of pharmacy curricula where demonstration and assessment of terminal learning occur. This review examines published literature in relation to the most recent CAPE outcomes to determine the extent to which they have been addressed during APPEs since 1996. Details related to the APPE focus, intervention(s)/learning setting(s), and assessments are summarized according to the 15 CAPE outcomes. Further, the assessments are categorized according to the level of learning achieved using an available method. Commonly addressed CAPE outcomes are highlighted, as well as those for which published reports on APPEs are lacking. The range and quality of assessments are discussed, emphasizing the need for continuous improvement in scholarly design and assessment.
Objective. To improve the reliability and discrimination of a pharmacy resident interview evaluation form, and thereby improve the reliability of the interview process.

Methods. In phase 1 of the study, the authors used a Many-Facet Rasch Measurement model to optimize an existing evaluation form for reliability and discrimination. In phase 2, interviewer pairs used the modified evaluation form within 4 separate interview stations. In phase 3, 8 interviewers individually evaluated each candidate in one-on-one interviews.

Results. In phase 1, the evaluation form had a reliability of 0.98 with a person separation of 6.56; the form reproducibly separated applicants into 6 distinct groups. Using that form in phases 2 and 3, the largest source of variation was candidates, and content specificity was the next largest. The phase 2 g-coefficient was 0.787, while the confirmatory phase 3 g-coefficient was 0.922. Process reliability improved with more stations despite fewer interviewers per station; the impact of content specificity was greatly reduced with more interview stations.

Conclusion. A more reliable, discriminating evaluation form was developed to evaluate candidates during resident interviews, and a process was designed that reduced the impact of content specificity.

Keywords: psychometrics, interview, residency, reliability

INTRODUCTION

Postgraduate year 1 (PGY1) pharmacy residencies are increasingly prevalent in the United States; however, the pool of resident applicants has surpassed the number of available positions,1 and professional organizations anticipate continued growth.2,3 Establishing a fair and objective process for selecting residents seems essential, and the reliability of the process is a key characteristic in ensuring fairness in these candidate assessments.4 An essential element of a resident selection process is the interview. Lack of validity, objectivity, reliability, and structure in the interview process has been documented in medical residency,5-8 medical school,9 and pharmacy school admissions.10

Selecting the best candidates for pharmacy residency training is a difficult task. The ideal interview and selection process for residency candidates would be one that is efficient and objective and that produces reliable feedback/information that could be used to make informed decisions. The survey questions asked and criteria used to make decisions appear to be fairly consistent among residency programs.11,12 Psychometric developments over the past few decades may help increase the reliability of tools used in the interview process. The objective structured clinical examination (OSCE) was first discussed in 1979, and its interview "offspring," the multiple mini-interview (MMI), was described in 2003. Both showed promise for improving interview reliability. The overall concept and purpose of the MMI is to reduce the impact of content specificity, a concern in assessments, on the interview as compared to a traditional individual interview.13-15 In the MMI, candidates rotate through interview stations where different...
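The phase 1 figures can be read against the standard Rasch relation between person separation (G) and person reliability (R), R = G^2 / (1 + G^2). The abstract does not state that this exact relation was used, so the short sketch below is only an illustrative consistency check under that assumption; the function names are ours.

```python
# Standard Rasch relation between person separation (G) and person reliability (R):
#   R = G**2 / (1 + G**2)   and equivalently   G = sqrt(R / (1 - R)).
# Illustrative only; the study may have computed these quantities differently.

def reliability_from_separation(g: float) -> float:
    """Person reliability implied by a person separation index G."""
    return g ** 2 / (1 + g ** 2)

def separation_from_reliability(r: float) -> float:
    """Person separation index implied by a person reliability R."""
    return (r / (1 - r)) ** 0.5

print(round(reliability_from_separation(6.56), 3))  # 0.977, rounds toward the reported 0.98
print(round(separation_from_reliability(0.98), 2))  # 7.0, the same order as the reported 6.56
```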
We read with interest the recent articles related to admissions interviews and appreciate that this has been an area of study and publication.1,2 We would, however, like to discuss the differences noted between the 2 most recent publications on this subject. To our understanding, the multiple mini-interview (MMI), as described by Cameron and colleagues, represents the next generation of interviewing and an evolution from more traditional interview formats like that described by Kelsch and colleagues. While not stated directly in the Cameron article, content specificity is an important concern for interviewers, and one the MMI format was designed to address. This concern is not addressed by a single-occurrence traditional interview, including those with multiple interviewers. (Admittedly, our college's current interview process is similar to that described by Kelsch.) Content specificity has been found within assessment types throughout education and is known to limit reliability.3,4 Literature discussing content specificity has suggested that little can be done to avoid it confounding results. Its control is, however, a key concept behind the improved reliability of the objective structured clinical examination (OSCE) format over earlier oral clinical examinations. In fact, the MMI is simply an admissions OSCE.5 Therefore, because larger numbers of MMI stations generally yield less unreliability due to content specificity, incorporating an MMI seems a current best-practice approach to controlling and minimizing this source of score variability.

As expected, we found the same in a recent analysis of our PGY1 program's interview candidates. Using generalizability theory, much as others in medical education have, we partitioned the variability in our interview process into its facets. We established 4 separate panels/stations, each consisting of 2 interviewers, and interviewed 24 residency candidates. Analyzing the resulting data, candidates accounted for 74% of the variation (ie, the true variance that we want), interview stations for 3.4%, interviewers for 2.5% (ie, inter-rater reliability), and candidate-station interaction (ie, content specificity) for 13.5%, while residual error was 6.6%. Notably, our reliability (g coefficient) was 0.787 and could improve to 0.847 if we had only 1 interviewer and 8 separate interview stations. To compare with Kelsch, our intraclass correlation was 0.832 and Cronbach alpha was 0.868. That is, we had slightly less inter-rater divergence, though this contributed only minimal variance compared with other variance sources.

Sinking substantial resources into attempts to alleviate concerns about inter-rater reliability (ie, interviewer training) should be kept in perspective; we did not train our interviewers to use our interview rubric for this event. Others have been more condemning of training.6 While inter-rater congruence and reliability are an important and often highly focused-upon area, the literature has shown content specificity to have a larger role in decreasing the reliability of...
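The D-study logic behind projecting from 4 two-interviewer stations to 8 single-interviewer stations can be sketched with a standard relative g-coefficient for a candidates x (interviewers nested in stations) design. The letter does not state the exact formula it used, so the sketch below, with the reported variance percentages plugged in, illustrates only the direction of the effect (more stations divide the candidate-station, ie, content-specificity, variance further) and is not expected to reproduce the reported 0.787 and 0.847.

```python
# A minimal D-study sketch for a candidates x (interviewers nested in stations)
# design, using a standard relative g-coefficient:
#   g = var_p / (var_p + var_ps / n_s + var_resid / (n_s * n_i))
# Illustrative only: the letter does not give its exact formula, so these
# values are not expected to match the reported 0.787 and 0.847.

def relative_g(var_p, var_ps, var_resid, n_stations, n_interviewers):
    """Relative g-coefficient for scores averaged over stations and interviewers."""
    error = var_ps / n_stations + var_resid / (n_stations * n_interviewers)
    return var_p / (var_p + error)

var_candidate = 74.0      # wanted (true) candidate variance, in percent
var_cand_station = 13.5   # candidate-station interaction (content specificity)
var_residual = 6.6        # residual error

print(round(relative_g(var_candidate, var_cand_station, var_residual, 4, 2), 3))
print(round(relative_g(var_candidate, var_cand_station, var_residual, 8, 1), 3))
# The 8-station configuration is higher because the content-specificity term
# is divided by 8 instead of 4, the same direction of effect the letter reports.
```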