The purpose of our study was to develop and validate a modified electronic key feature exam of clinical decision-making skills for undergraduate medical students. To this end, we calculated the reliability of the test (15 items), the item difficulty levels, the item-total correlations, and the correlations with other measures of knowledge (a 40-item MC test and 580 items of the German MC National Licensing Exam, Part II). Based on the guidelines of the Medical Council of Canada, a modified electronic key feature exam for internal medicine consisting of 15 key features (KFs) was developed for fifth-year German medical students. Long-menu (LM) and short-menu (SM) question formats were used, and acceptance was assessed through a questionnaire. Thirty-seven students from four medical schools participated voluntarily. The reliability of the key feature exam was 0.65 (Cronbach's alpha), item difficulty scores ranged from 0.3 to 0.8, and item-total correlations ranged from 0.0 to 0.4. Correlations between the KF exam results and the other measures of knowledge were intermediate (r between 0.44 and 0.47), as was the learners' level of acceptance. The modified electronic KF examination is a feasible and reliable evaluation tool that may be implemented for the assessment of clinical undergraduate training.
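The psychometric indices reported above are standard and straightforward to compute. The following minimal sketch (not the authors' code) shows how Cronbach's alpha, item difficulty (p-values), and corrected item-total correlations can be derived in Python from a students-by-items matrix of dichotomous scores; the data are randomly generated placeholders, and only the matrix dimensions (37 students, 15 items) mirror the study design.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_students, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def item_difficulty(scores: np.ndarray) -> np.ndarray:
    """Proportion of students answering each item correctly (p-value)."""
    return scores.mean(axis=0)

def item_total_correlation(scores: np.ndarray) -> np.ndarray:
    """Corrected item-total correlation: each item vs. the sum of all others."""
    out = np.empty(scores.shape[1])
    for i in range(scores.shape[1]):
        rest = np.delete(scores, i, axis=1).sum(axis=1)
        out[i] = np.corrcoef(scores[:, i], rest)[0, 1]
    return out

# Hypothetical data: 37 students, 15 key-feature items (dimensions as in the study).
rng = np.random.default_rng(0)
scores = (rng.random((37, 15)) < 0.55).astype(float)
print(cronbach_alpha(scores))
print(item_difficulty(scores))
print(item_total_correlation(scores))
```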
Background: Medical knowledge encompasses both conceptual knowledge (facts, or "what" information) and procedural knowledge ("how" and "why" information). Conceptual knowledge is known to be an essential prerequisite for clinical problem solving. Medical students learn primarily from textbooks and often struggle to apply their conceptual knowledge to clinical problems. Recent studies address how to foster the acquisition of procedural knowledge and its application in medical education, but little is known about which learner characteristics predict performance on procedural knowledge tasks.
Methods: Domain-specific conceptual knowledge (facts) in clinical nephrology was provided to 80 medical students (3rd to 5th year) using electronic flashcards in a laboratory setting. Learner characteristics were obtained by questionnaires. Procedural knowledge in clinical nephrology was assessed by key feature problems (KFP) and problem-solving tasks (PST), reflecting strategic and conditional knowledge, respectively.
Results: Results in the two procedural knowledge tests (KFP and PST) correlated significantly with each other. In univariate analysis, performance in procedural knowledge (sum of KFP + PST) was significantly correlated with (1) the results of the conceptual knowledge test (CKT), (2) the intended future career as a hospital-based doctor, (3) the duration of clinical clerkships, and (4) the results of the written German National Medical Examination Part I on preclinical subjects (NME-I). After multiple regression analysis, only clinical clerkship experience and NME-I performance remained independent influencing factors.
Conclusions: Above a certain level, performance in procedural knowledge tests seems independent of the degree of domain-specific conceptual knowledge. Procedural knowledge may be fostered by clinical experience. More attention should be paid to the interplay between individual clinical clerkship experiences and the structured teaching and assessment of procedural knowledge in medical education curricula.
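The two-step analysis described in the Results (univariate correlations followed by multiple regression) can be illustrated with a short sketch. The variable names and simulated data below are hypothetical stand-ins, not the study dataset; scipy and statsmodels are assumed to be available.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
n = 80  # sample size as in the study
df = pd.DataFrame({
    "CKT": rng.normal(size=n),              # conceptual knowledge test score
    "clerkship_weeks": rng.normal(size=n),  # duration of clinical clerkships
    "NME_I": rng.normal(size=n),            # preclinical national exam result
})
# Simulated outcome: procedural knowledge sum score (KFP + PST).
df["procedural"] = 0.4 * df["clerkship_weeks"] + 0.4 * df["NME_I"] + rng.normal(size=n)

# Step 1: univariate Pearson correlations of each candidate predictor.
for col in ["CKT", "clerkship_weeks", "NME_I"]:
    r, p = stats.pearsonr(df[col], df["procedural"])
    print(f"{col}: r={r:.2f}, p={p:.3f}")

# Step 2: multiple linear regression; only independent predictors
# should remain significant when all are entered together.
X = sm.add_constant(df[["CKT", "clerkship_weeks", "NME_I"]])
print(sm.OLS(df["procedural"], X).fit().summary())
```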
For clinical medical students using electronic flashcards, repetitive testing is a more potent learning strategy than repetitive studying for short-term, but not long-term, knowledge retention. Although students use testing as a learning strategy, they seem to be unaware of its superiority in supporting short-term knowledge retention.
Our aim was to compare different scoring algorithms for Pick-N multiple-correct-answer multiple-choice (MC) exams with regard to test reliability, student performance, total item discrimination, and item difficulty. Data from six end-of-term exams in internal medicine taken by 3rd-year medical students at Munich University between 2005 and 2008 were analysed (1,255 students; 180 Pick-N items in total). Each question scored a maximum of one point. We compared three scoring algorithms: (a) dichotomous scoring (DS): one point if all true answers and no wrong answers were chosen; (b) partial credit algorithm 1 (PS(50)): one point for 100% of the true answers, 0.5 points for 50% or more of the true answers, and zero points for less than 50%, with no point deduction for wrong choices; (c) partial credit algorithm 2 (PS(1/m)): a fraction of one point for each correct answer identified, depending on the total number m of true answers, again with no deduction for wrong choices. Applying partial credit yielded psychometric results superior to dichotomous scoring (DS). The algorithms examined produced similar psychometric data, with PS(50) yielding only slightly higher reliability coefficients than PS(1/m). The Pick-N MC format scored with the PS(50) or PS(1/m) algorithm is suited for undergraduate medical examinations; partial knowledge should be rewarded in Pick-N MC exams.
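Because the three algorithms are fully specified in the abstract, they translate directly into code. The sketch below uses our own function names and an invented example item; it is an illustration of the scoring rules as described, not the authors' implementation.

```python
def score_dichotomous(chosen: set, true_answers: set) -> float:
    """DS: one point only if all true and no wrong options were chosen."""
    return 1.0 if chosen == true_answers else 0.0

def score_ps50(chosen: set, true_answers: set) -> float:
    """PS(50): 1 point for 100% of true answers, 0.5 for >= 50% of them,
    0 otherwise; wrong choices are ignored (no deduction)."""
    frac = len(chosen & true_answers) / len(true_answers)
    if frac == 1.0:
        return 1.0
    return 0.5 if frac >= 0.5 else 0.0

def score_ps_1m(chosen: set, true_answers: set) -> float:
    """PS(1/m): 1/m point per correct answer identified, where m is the
    total number of true answers; wrong choices are ignored."""
    return len(chosen & true_answers) / len(true_answers)

# Example item: 2 of 3 true options identified, plus one wrong option chosen.
chosen, truth = {"A", "B", "E"}, {"A", "B", "C"}
print(score_dichotomous(chosen, truth))  # 0.0
print(score_ps50(chosen, truth))         # 0.5
print(score_ps_1m(chosen, truth))        # 2/3 ≈ 0.67
```

The example makes the practical difference visible: a student with substantial partial knowledge scores zero under DS but receives credit under both partial-credit schemes.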