The COVID‐19 pandemic forced medical schools to rapidly transform their curricula using online learning approaches. At our institution, the preclinical Practice of Medicine (POM) course was transitioned to large‐group, synchronous, video‐conference sessions. The aim of this study was to assess whether there were differences in learner engagement, as evidenced by student question‐asking behaviors, between in‐person and video‐conference sessions in one preclinical medical student course. In spring 2020, large‐group didactic sessions in POM were converted to video‐conference sessions. During these sessions, student microphones were muted, and video capabilities were turned off. Students submitted typed questions via a Q&A box, which was monitored by a senior student teaching assistant. We compared student question‐asking behavior in recorded video‐conference course sessions from POM in spring 2020 with matched, recorded, in‐person sessions from the same course in spring 2019. We found that, on average, instructors answered a greater number of student questions and spent a greater percentage of time on Q&A in the online sessions than in the in‐person sessions. We also found that students asked a greater number of higher‐complexity questions in the online version of the course than in the in‐person course. The video‐conference learning environment can promote higher student engagement than the in‐person learning environment, as measured by student question‐asking behavior. Developing an understanding of the specific elements of the online learning environment that foster student engagement has important implications for instructional design in both the online and in‐person settings.
This study compares the performance of first- and second-year medical students with that of two models of a popular chatbot on free-response clinical reasoning examinations.
Important opportunities exist to improve inpatient shared decision-making (SDM). Team size, number of learners, patient census, and type of decision being made did not affect SDM, suggesting that even large, busy services can perform SDM if properly trained.
In today's hospital and clinic environment, the obstacles to bedside teaching for both faculty and trainees are considerable. As electronic health record systems become increasingly prevalent, trainees are spending more time performing patient care tasks from computer workstations, limiting opportunities to learn at the bedside. Physical examination skills are rarely emphasized, and low confidence levels, especially among junior faculty, pose additional barriers to teaching the bedside examination.
Importance: Studies show that ChatGPT, a general-purpose large language model chatbot, could pass the multiple-choice US Medical Licensing Exams, but the model's performance on open-ended clinical reasoning is unknown.
Objective: To determine whether ChatGPT is capable of consistently meeting the passing threshold on free-response, case-based clinical reasoning assessments.
Design: Fourteen multi-part cases were selected from clinical reasoning exams administered to pre-clerkship medical students between 2019 and 2022. For each case, the questions were run through ChatGPT twice and responses were recorded. Two clinician educators independently graded each run according to a standardized grading rubric. To further assess the degree of variation in ChatGPT's performance, we repeated the analysis on a single high-complexity case 20 times.
Setting: A single US medical school.
Participants: ChatGPT.
Main Outcomes and Measures: Passing rate of ChatGPT's scored responses and the range in model performance across multiple run-throughs of a single case.
Results: Twelve of the 28 ChatGPT exam responses (43%) achieved a passing score, with a mean score of 69% (95% CI, 65% to 73%) compared with the established passing threshold of 70%. When given the same case 20 separate times, ChatGPT's performance on that case varied, with scores ranging from 56% to 81%.
Conclusions and Relevance: ChatGPT's ability to achieve a passing performance in nearly half of the cases analyzed demonstrates the need to revise clinical reasoning assessments and incorporate artificial intelligence (AI)-related topics into medical curricula and practice.
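As an illustration of the repeated-run design and the summary statistics reported above, the following Python sketch shows how one might send a single case prompt to a chatbot several times, grade each response against a rubric, and summarize the score range and a 95% confidence interval for the mean. This is not the authors' code: ask_chatgpt and grade_response are hypothetical placeholders for the model call and the rubric-based grading step, and the normal-approximation interval is an assumption rather than the study's exact analysis.

import statistics

def ask_chatgpt(prompt: str) -> str:
    # Hypothetical placeholder for the chatbot call; replace with a real API client.
    raise NotImplementedError

def grade_response(response: str) -> float:
    # Hypothetical placeholder for rubric-based grading; returns a percentage score (0-100).
    raise NotImplementedError

def summarize_case(case_prompt: str, n_runs: int = 20) -> dict:
    # Run one case repeatedly and summarize score variability across runs.
    scores = [grade_response(ask_chatgpt(case_prompt)) for _ in range(n_runs)]
    mean = statistics.mean(scores)
    sem = statistics.stdev(scores) / (len(scores) ** 0.5)  # standard error of the mean
    return {
        "min": min(scores),
        "max": max(scores),
        "mean": mean,
        "ci95": (mean - 1.96 * sem, mean + 1.96 * sem),  # normal approximation
    }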