Purpose
This study investigated medical educators’ readiness for online teaching by exploring their perceived ability in, and the perceived importance of, online teaching competencies, and identified their highest-priority educational needs.
Methods
In this study, 144 medical education faculty members from a university were invited to participate. The faculty online teaching readiness scale was distributed online at the end of the spring semester of 2020, and 38 faculty members responded during the 2-week collection period. The collected data were analyzed with descriptive statistics, paired t-tests, the Borich Needs Assessment, and the Locus for Focus model.
Results
The overall average perceived ability was 2.76, while the overall average perceived importance was 3.36. The course design and technical competency categories showed the highest and lowest educational needs, respectively. Five competencies were identified as the highest-priority educational needs.
Conclusion
The results revealed that the medical educators were not ready for online teaching; thus, there are urgent educational needs for online teaching competencies.
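The Borich Needs Assessment and the Locus for Focus model mentioned in the Methods follow widely used formulas: the Borich weighted discrepancy score multiplies the mean gap between importance and ability for each competency by its mean importance, and the Locus for Focus model places each competency in a quadrant defined by the grand means of importance and of the discrepancy, with the high-importance, high-discrepancy quadrant marking the top-priority needs. The sketch below is a minimal illustration of these calculations with hypothetical variable names; it is not the study's own analysis code.

```python
import numpy as np

def borich_score(importance, ability):
    """Borich weighted discrepancy score for one competency.

    importance, ability: per-respondent ratings for that competency
    (e.g., on a Likert scale).  The score is the mean (importance - ability)
    gap weighted by the mean importance rating.
    """
    importance = np.asarray(importance, dtype=float)
    ability = np.asarray(ability, dtype=float)
    return (importance - ability).mean() * importance.mean()

def locus_for_focus(mean_importance, mean_discrepancy):
    """Locus for Focus quadrant labels for a set of competencies.

    x-axis: mean discrepancy (importance - ability); y-axis: mean importance.
    Boundaries are the grand means of each axis; 'HH' (high importance,
    high discrepancy) is the highest-priority quadrant.
    """
    imp = np.asarray(mean_importance, dtype=float)
    dis = np.asarray(mean_discrepancy, dtype=float)
    y_cut, x_cut = imp.mean(), dis.mean()
    return [("H" if y >= y_cut else "L") + ("H" if x >= x_cut else "L")
            for x, y in zip(dis, imp)]
```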
Purpose: Setting standards is critical in the health professions. However, appropriate standard-setting methods are not always applied to set cut scores in performance assessments. The aim of this study was to compare the cut scores obtained when the standard-setting approach is changed from the norm-referenced method to the borderline group method (BGM) and the borderline regression method (BRM) in an objective structured clinical examination (OSCE) in a medical school.
Methods: This was an exploratory study modeling the BGM and BRM. A total of 107 fourth-year medical students attended the OSCE on 15 July 2021, which comprised seven stations involving encounters with standardized patients (SPs) and one station involving skills performed on a manikin. Thirty-two physician examiners evaluated performance by completing a checklist and global rating scales.
Results: The cut score of the norm-referenced method was lower than that of the BGM (p<0.01) and the BRM (p<0.02). There was no significant difference in the cut scores between the BGM and BRM (p=0.40). The station with the highest standard deviation and the highest proportion of borderline examinees showed the largest difference in cut scores across standard-setting methods.
Conclusion: Cut scores prefixed by the norm-referenced method, without considering station content or examinee performance, can vary in appropriateness with station difficulty and content, affecting standard-setting decisions. If there is adequate consensus on the criteria for the borderline group, standard setting with the BRM could be applied as a practical and defensible method for determining the cut score of an OSCE.
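For readers unfamiliar with the two borderline methods compared above, they are commonly computed as follows: the BGM takes the mean checklist score of examinees whose global rating is "borderline", while the BRM regresses checklist scores on global ratings across all examinees and reads off the predicted checklist score at the borderline rating. The sketch below is a minimal illustration under the assumption that the global rating scale codes "borderline" as 2; it is not the authors' implementation.

```python
import numpy as np

def bgm_cut_score(checklist_scores, global_ratings, borderline=2):
    """Borderline group method: mean checklist score of the borderline group."""
    scores = np.asarray(checklist_scores, dtype=float)
    ratings = np.asarray(global_ratings)
    return scores[ratings == borderline].mean()

def brm_cut_score(checklist_scores, global_ratings, borderline=2):
    """Borderline regression method: regress checklist scores on global
    ratings, then take the predicted checklist score at the borderline rating."""
    slope, intercept = np.polyfit(np.asarray(global_ratings, dtype=float),
                                  np.asarray(checklist_scores, dtype=float),
                                  deg=1)
    return slope * borderline + intercept
```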
This study assessed the clinical performance of 150 third-year medical students in a whole-task emergency objective structured clinical examination station simulating a patient visiting the emergency department with palpitations, from November 25 to 27, 2019. Clinical performance was assessed by the frequency and percentage of students who performed history taking (HT), physical examination (PE), an electrocardiography (ECG) study, patient education (Ed), and clinical reasoning (CR), which were the items on the checklist. Overall, 18.0% of students checked the patient’s pulse, 51.3% completed an ECG study, and 57.9% explained the results to the patient; 38.0% of students did not even attempt an ECG study. In this whole-task emergency station, students performed well in HT and CR but unsatisfactorily in PE, the ECG study, and Ed. Clinical skills educational programs should focus on PE, timely diagnostic tests, and sufficient Ed.
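As a minimal illustration of the frequency and percentage summary described above (using hypothetical checklist records, not the study's data), each checklist item can be coded 1 if performed and 0 if not, and summarized per item:

```python
import pandas as pd

# Hypothetical 0/1 checklist records: 1 = item performed, 0 = not performed.
checklist = pd.DataFrame({
    "HT":  [1, 1, 1, 0],
    "PE":  [0, 1, 0, 0],
    "ECG": [1, 0, 1, 1],
    "Ed":  [1, 0, 0, 1],
    "CR":  [1, 1, 1, 0],
})

# Frequency and percentage of students who performed each checklist item.
summary = pd.DataFrame({
    "frequency": checklist.sum(),
    "percentage": (checklist.mean() * 100).round(1),
})
print(summary)
```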
The overall reliability was below 0.70, and the standardization of exam sites was unclear. To improve the quality of the exam, case development, item design, training of standardized patients and assessors, and standardization of sites are necessary. Above all, a well-organized matrix for measuring the quality of the exam needs to be developed.
Purpose: This study investigated whether reliability was acceptable when the number of cases in the objective structured clinical examination (OSCE) decreased from 12 to 8, using generalizability theory (GT).
Methods: This psychometric study analyzed data from an OSCE for 439 fourth-year medical students conducted in the Busan and Gyeongnam areas of South Korea from July 12 to 15, 2021. The generalizability study (G-study) considered 3 facets: students (p), cases (c), and items (i). The analysis was designed as p(i:c) because items were nested within cases. The acceptable generalizability (G) coefficient was set at 0.70. The G-study and decision study (D-study) were performed using G String IV version 6.3.8 (papawork.com).
Results: All G coefficients except that for July 14 (0.69) were above 0.70. The major sources of variance components (VCs) were items nested in cases (i:c), from 51.34% to 57.70%, and residual error (pi:c), from 39.55% to 43.26%. The proportion of VCs attributable to cases was negligible, ranging from 0.00% to 2.03%.
Conclusion: The number of cases decreased in the 2021 Busan and Gyeongnam OSCE; however, reliability remained acceptable. In the D-study, reliability was maintained at 0.70 or higher with more than 21 items per case across 8 cases and more than 18 items per case across 9 cases. However, according to the G-study, increasing the number of items nested within cases, rather than the number of cases, could further improve reliability. The consortium needs to maintain a case bank with varied items to implement a reliable blueprinting combination for the OSCE.
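The D-study projections described above follow the standard formula for the relative G coefficient in a p(i:c) design: person variance divided by the sum of person variance and relative error variance, where the relative error averages the person-by-case and residual components over the D-study numbers of cases and items per case. The sketch below uses hypothetical variance components, not the study's estimates, to show how such a projection is computed.

```python
def g_coefficient(var_p, var_pc, var_pic, n_cases, n_items_per_case):
    """Relative G coefficient for a p(i:c) design.

    var_p   : variance component for persons (students)
    var_pc  : person-by-case interaction variance component
    var_pic : person-by-item-nested-in-case (residual) variance component
    The relative error variance divides the interaction components by the
    D-study numbers of cases and items per case.
    """
    relative_error = var_pc / n_cases + var_pic / (n_cases * n_items_per_case)
    return var_p / (var_p + relative_error)

# Hypothetical variance components, illustrating an 8-case, 21-item-per-case design.
print(round(g_coefficient(var_p=0.004, var_pc=0.006, var_pic=0.120,
                          n_cases=8, n_items_per_case=21), 2))
```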