Background: ChatGPT (Chat Generative Pre-trained Transformer; OpenAI, San Francisco, CA, USA) is an artificial intelligence language model that has gained popularity because of its large training corpus and its ability to interpret and respond to a wide range of queries. Although researchers have tested it in various fields, its performance varies by domain. We aimed to further test its ability in the medical field. Methods: We used questions from Taiwan's 2022 Family Medicine Board Exam, which mixed Chinese and English, covered various question types, including negative-phrase (reverse) questions and multiple-choice questions, and mainly focused on general medical knowledge. We pasted each question into ChatGPT, recorded its response, and compared it with the correct answer provided by the exam board. We used SAS 9.4 (SAS Institute, Cary, NC, USA) and Excel to calculate the accuracy rate for each question type. Results: ChatGPT answered 52 of 125 questions correctly, for an overall accuracy rate of 41.6%. Question length did not affect accuracy. The accuracy rates were 45.5% for negative-phrase questions, 33.3% for multiple-choice questions, 58.3% for questions with mutually exclusive options, 50.0% for case scenario questions, and 43.5% for questions on Taiwan's local policies, with no statistically significant difference among them. Conclusion: ChatGPT's accuracy rate was not sufficient for Taiwan's Family Medicine Board Exam. Possible reasons include the difficulty of a specialist examination and the relative scarcity of traditional Chinese language resources in its training data. However, ChatGPT performed acceptably on negative-phrase questions, questions with mutually exclusive options, and case scenario questions, and it can be a helpful tool for learning and exam preparation. Future research could explore ways to improve ChatGPT's accuracy on specialized exams and in other domains.
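The abstract reports the overall score (52/125) and per-type accuracy rates, and states that no statistical difference was found, but it does not give the per-type question counts or name the test used. Below is a minimal sketch of this kind of analysis in Python, assuming a chi-square test of independence; the per-type counts in `counts_by_type` are hypothetical placeholders chosen only to reproduce the reported percentages, not the study's actual data.

```python
# Sketch of the accuracy analysis described in the abstract.
# Overall figures (52/125 correct) come from the abstract; the per-type
# (correct, incorrect) counts below are HYPOTHETICAL placeholders, since
# the abstract reports only percentages for each question type.
from scipy.stats import chi2_contingency

total_questions = 125
total_correct = 52
print(f"Overall accuracy: {total_correct / total_questions:.1%}")  # 41.6%

# Hypothetical per-type counts -- replace with the real data.
# Question types may overlap, so these need not sum to 125.
counts_by_type = {
    "negative-phrase": (5, 6),     # 5/11  ~ 45.5%
    "multiple-choice": (4, 8),     # 4/12  ~ 33.3%
    "mutually exclusive": (7, 5),  # 7/12  ~ 58.3%
    "case scenario": (6, 6),       # 6/12  = 50.0%
    "local policy": (10, 13),      # 10/23 ~ 43.5%
}

for qtype, (correct, incorrect) in counts_by_type.items():
    rate = correct / (correct + incorrect)
    print(f"{qtype}: {rate:.1%}")

# Chi-square test of independence: does accuracy differ by question type?
table = [list(pair) for pair in counts_by_type.values()]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")  # large p => no significant difference
```

With small per-type counts like these, a Fisher-style exact test would also be a reasonable choice; the chi-square test is shown here only as the most common default for comparing proportions across groups.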
Taiwanese students who graduated from Polish medical schools (P-IMGs) constitute the second-largest group of international medical graduates in Taiwan. In 2009, domestic medical students in Taiwan staged mass demonstrations against P-IMGs' exemption from the qualifying test that precedes the licensing exam. Although medical circles in Taiwan may still hold prejudices against P-IMGs, little is known about their career development. This study analyzed P-IMGs' choices of specialties and training sites from 2000 to 2020 using data from the membership section of the Taiwan Medical Journal, the monthly official publication of the Taiwan Medical Association. Of 372 P-IMGs, 34.2% chose internal medicine and 17.1% chose surgery. Although academic medical centers offered 76% of all available trainee positions in a given year, only 49.3% of P-IMGs received training there. By contrast, 20.9% of P-IMGs were trained at nonmetropolitan hospitals, which together accounted for only 5.8% of trainee positions. In conclusion, P-IMGs received residency training in less favored specialties and at less favored sites. Their long-term career development deserves further study.