Background: In an integrated curriculum, multiple instructors take part in a course in the form of team teaching. Accordingly, medical schools strive to manage each course run by numerous instructors. As part of curriculum management, course evaluation is conducted, but a single, retrospective course evaluation does not comprehensively capture student perceptions of classes taught by different instructors. This study aimed to demonstrate the need for individual class evaluation and, further, to identify teaching characteristics that instructors need to keep in mind when preparing classes. Methods: From 2014 to 2015, students at one medical school left comments on evaluation forms after each class; course-level evaluations were also collected at the end of each course. Student comments were categorized by connotation (positive or negative) and by subject. Within each subject category, test scores were compared between positively and negatively mentioned classes, and the Mann-Whitney U test was performed to test for group differences in scores. The same method was applied to the course evaluation data. Results: Tests on the course evaluation data showed a group difference only in the practice/participation category. However, tests on the individual class evaluation data showed group differences in six categories: difficulty, main points, attitude, media/contents, interest, and materials. That is, the test scores of classes mentioned positively in these six domains were significantly higher than those of negatively mentioned classes. Conclusions: The results demonstrate that individual class evaluation is needed to manage multi-instructor courses in the integrated curricula of medical schools. Based on the students' extensive feedback, we identified teaching characteristics statistically related to academic achievement. School authorities can use these findings to encourage instructors to develop effective teaching characteristics when preparing classes.
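A minimal sketch of the comparison described in the Methods, assuming hypothetical class-level test scores (the study's actual evaluation data are not reproduced here): within one subject category, scores of positively mentioned classes are compared with scores of negatively mentioned classes using the Mann-Whitney U test.

```python
from scipy.stats import mannwhitneyu

# Hypothetical mean test scores for classes mentioned positively vs. negatively
# in one subject category (e.g., "difficulty"); values are illustrative only.
scores_positive = [82.1, 79.4, 85.0, 77.8, 81.3]
scores_negative = [70.2, 74.5, 68.9, 72.0]

# Two-sided Mann-Whitney U test for a group difference in scores
u_stat, p_value = mannwhitneyu(scores_positive, scores_negative,
                               alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```

The same comparison would be repeated per category for both the class-level and the course-level evaluation data.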
Introduction: Medical students are motivated to engage actively in their studies, yet at least 50% of medical students suffer from academic burnout. Taking a social-environmental perspective, this pilot study tested six hypotheses to account for medical student engagement and burnout via an effort-reward imbalance (ERI) model. Methods: This study measured ERI, over-commitment, engagement, burnout, negative affect, demographic variables, and test results during 2017. Seventy-nine medical students at a college of medicine in Seoul, Republic of Korea, completed the online questionnaires (response rate: 20.73%). We used hierarchical regression analyses to examine the effects of the ERI ratio, over-commitment, and the interaction between the ERI ratio and over-commitment on engagement and burnout after adjusting for demographic variables and negative affect. Results: The ERI ratio was negatively related to engagement (p < 0.05), whereas over-commitment was positively related to engagement (p < 0.05). For burnout, affiliation, age, and negative affect were significant predictors, and the ERI ratio was positively associated with burnout (p < 0.05). When we performed regression analyses on the three sub-dimensions of engagement and burnout, the factors that affected each sub-dimension differed. Discussion: This pilot study revealed that the ERI ratio in school settings is a common factor explaining both the engagement and the burnout of medical students. In addition, over-commitment significantly accounted for engagement but did not significantly account for burnout. These results for over-commitment may be explained by the unique characteristics of medical students.
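As a hedged illustration of the hierarchical regression strategy described above, the sketch below regresses engagement on demographic covariates and negative affect, then adds the ERI ratio and over-commitment, and finally adds their interaction. All data and column names (engagement, eri_ratio, overcommitment, neg_affect, age, sex) are synthetic assumptions, not the study's dataset or variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Entirely synthetic data standing in for the 79 respondents
rng = np.random.default_rng(0)
n = 79
df = pd.DataFrame({
    "engagement": rng.normal(4.0, 1.0, n),
    "eri_ratio": rng.normal(1.0, 0.3, n),
    "overcommitment": rng.normal(3.0, 0.8, n),
    "neg_affect": rng.normal(2.5, 0.7, n),
    "age": rng.integers(20, 30, n),
    "sex": rng.choice(["F", "M"], n),
})

# Step 1: demographic variables and negative affect only
m1 = smf.ols("engagement ~ age + C(sex) + neg_affect", data=df).fit()
# Step 2: add ERI ratio and over-commitment
m2 = smf.ols("engagement ~ age + C(sex) + neg_affect + eri_ratio + overcommitment",
             data=df).fit()
# Step 3: add the ERI ratio x over-commitment interaction
m3 = smf.ols("engagement ~ age + C(sex) + neg_affect + eri_ratio * overcommitment",
             data=df).fit()

# Incremental R^2 shows the contribution of each block of predictors
print(m1.rsquared, m2.rsquared - m1.rsquared, m3.rsquared - m2.rsquared)
```

The same three-step structure would be repeated with burnout (and each sub-dimension) as the outcome.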
Purpose: Test equating studies in medical education have been conducted only for high-stakes exams or to compare two tests given in a single course. Based on item response theory, we equated computer-based test (CBT) results from the basic medical education curriculum at the College of Medicine, the Catholic University of Korea, and evaluated the validity of using fixed passing scores. Methods: We collected 232 CBTs (28,636 items) for 40 courses administered over a study period of 9 years. The final data used for test equating included 12 pairs of tests. After test equating, Wilcoxon rank-sum tests were used to identify changes in item difficulty between previous tests and subsequent tests. We then identified gaps between equated passing scores and actual passing scores in subsequent tests through an observed-score equating method. Results: The Wilcoxon rank-sum tests indicated no significant differences in item difficulty distribution by year for seven pairs. In the other five pairs, however, the items were significantly more difficult in subsequent years than in previous years. Regarding the gaps between equated passing scores and actual passing scores, the equated passing scores in 10 pairs were lower than the actual passing scores; in the other two pairs, the equated passing scores were higher. Conclusion: Our results suggest that the item difficulty distributions of tests administered in the same course during successive terms can differ significantly. It may therefore be problematic to use fixed passing scores without considering this possibility.
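A small sketch of the post-equating comparison described in the Methods: once two test forms are placed on a common IRT scale, their item-difficulty distributions can be compared with the Wilcoxon rank-sum test. The difficulty values below are hypothetical, not estimates from the 12 equated pairs in the study.

```python
from scipy.stats import ranksums

# Hypothetical IRT item-difficulty (b) parameters on a common scale after equating
difficulty_previous   = [-1.2, -0.4, 0.1, 0.6, 1.0, -0.8, 0.3]
difficulty_subsequent = [-0.5, 0.2, 0.7, 1.1, 1.4, -0.1, 0.9]

# Wilcoxon rank-sum test for a shift in the difficulty distribution between forms
stat, p_value = ranksums(difficulty_previous, difficulty_subsequent)
print(f"rank-sum statistic = {stat:.2f}, p = {p_value:.3f}")
```

A significant shift toward harder items in the subsequent form is the situation in which a fixed passing score diverges from the equated passing score.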
Engagement has not been widely studied in the field of medical education. The purpose of this study was to determine the relationship between admission year and engagement, on the assumption that the characteristics of admission cohorts may differ by year. The association between the effort-reward imbalance (ERI) model and engagement was also reinvestigated. Data were collected from 164 students at The Catholic University of Korea, College of Medicine. Ninety-nine students (18.97%) in 2017 and 65 students (12.38%) in 2018 answered an online questionnaire measuring demographic variables, ERI, over-commitment (OC), negative affect, and engagement. Participants' admission years were determined from the year in school they reported. Affiliation and year in school were excluded because of their high correlation with admission year. A categorical regression analysis was performed. Admission year, binary ERI, and OC were significant explanatory variables in this categorical regression model (R2 = .312, Adjusted R2 = .255, F = 5.444, p < .001), accounting for 13.4%, 27.9%, and 9.4% of the importance in the model, respectively. Quantification plots for admission year and binary ERI showed that engagement was highest in the 2018 admission cohort and lowest in the 2013 admission cohort, and that being reciprocally rewarded for effort was associated with higher engagement scores. A given admission cohort can thus be more or less engaged in learning. This study also confirms that receiving proper rewards for effort could be related to increased engagement.
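The study used categorical regression with optimal scaling (a CATREG-style analysis) with admission year, binary ERI, and OC as predictors. The sketch below is only a rough stand-in under that caveat: it fits an ordinary least-squares model with a dummy-coded admission year in statsmodels on synthetic data, since no optimal-scaling routine is assumed to be available here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data only; column names are illustrative, not the study's variables
rng = np.random.default_rng(1)
n = 164
df = pd.DataFrame({
    "engagement": rng.normal(4.0, 1.0, n),
    "admission_year": rng.choice(np.arange(2013, 2019), n),
    "eri_balanced": rng.integers(0, 2, n),   # 1 = rewards perceived to match efforts
    "overcommitment": rng.normal(3.0, 0.8, n),
})

# OLS with admission year treated as a categorical (dummy-coded) predictor
model = smf.ols("engagement ~ C(admission_year) + eri_balanced + overcommitment",
                data=df).fit()
print(model.rsquared, model.rsquared_adj)
```

In the optimal-scaling version, the quantifications assigned to each admission year play the role that the dummy coefficients play here, which is what the quantification plots in the abstract report.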