Background: In the past 3 months, OpenAI, a San Francisco-based artificial intelligence (AI) research laboratory, has released ChatGPT, a conversational large language model (LLM). ChatGPT can answer user questions, admit its mistakes, and learn from the users who interact with it. Objectives: Given the importance of producing evidence-based research in plastic surgery, the authors of this study sought to determine how accurately ChatGPT could generate novel systematic review ideas encompassing the diverse practice of cosmetic surgery. Methods: ChatGPT was prompted to produce 20 novel systematic review ideas for each of 12 topics within cosmetic surgery. For each topic, the system was asked to give 10 general and 10 specific ideas related to the concept. To determine the accuracy of ChatGPT, a literature review was conducted using PubMed (National Institutes of Health, Bethesda, MD), CINAHL (EBSCO Industries, Birmingham, AL), EMBASE (Elsevier, Amsterdam, the Netherlands), and Cochrane (Wiley, Hoboken, NJ). Results: A total of 240 ‘novel’ systematic review ideas were generated by ChatGPT. The system had an overall accuracy of 55%. When topics were stratified into general and specific ideas, ChatGPT was 35% accurate for general ideas and 75% accurate for specific ideas. Conclusions: ChatGPT is an excellent tool that should be utilized by plastic surgeons. It is versatile, with uses beyond research that include patient consultation, patient support, and marketing. As advancements in AI continue, it is important for plastic surgeons to consider the use of AI in their clinical practice.
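The reported figures are internally consistent. Assuming the 240 ideas split evenly into 120 general and 120 specific (10 of each across 12 topics), the subgroup accuracies reproduce the overall 55%:

\[
\frac{(0.35)(120) + (0.75)(120)}{240} \;=\; \frac{42 + 90}{240} \;=\; \frac{132}{240} \;=\; 0.55 .
\]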
Background: Developed originally as a tool for resident self-evaluation, the Plastic Surgery Inservice Training Examination (PSITE) has become a standardized assessment adopted by plastic surgery residency programs. The introduction of large language models (LLMs) such as ChatGPT (OpenAI, San Francisco, CA) has demonstrated the potential to help advance the field of plastic surgery. Objectives: The authors of this study sought to assess whether ChatGPT could be utilized as a tool in resident education by evaluating its accuracy on the PSITE. Methods: Questions were obtained from the 2022 PSITE, available on the American Council of Academic Plastic Surgeons (ACAPS) website. Questions containing images or tables were carefully inspected and flagged before being input into ChatGPT. All ChatGPT responses were qualified using the properties of natural coherence. Incorrect responses were divided into the following categories: logical, informational, or explicit fallacy. Results: ChatGPT answered a total of 242 questions with an accuracy of 54.96%. The software incorporated logical reasoning in 88.8% of questions, internal information in 95.5% of questions, and external information in 92.1% of questions. When responses were stratified by correctness, there was a statistically significant difference in ChatGPT’s use of external information (p < 0.05). Conclusions: ChatGPT is a versatile tool that has the potential to impact resident education by providing general knowledge, clarifying information, supporting case-based learning, and promoting evidence-based medicine. With advancements in LLMs and artificial intelligence (AI), ChatGPT may become an impactful tool for resident education within plastic surgery.
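The abstract does not state which statistical test produced the p < 0.05 comparison of external-information use between correct and incorrect responses; a chi-square test of independence on a 2x2 contingency table is one common choice for this kind of comparison. The sketch below illustrates that approach only; the counts are hypothetical placeholders, not data from the study.

```python
# Hypothetical sketch: comparing how often ChatGPT cited external information
# in correct vs. incorrect PSITE responses. The contingency counts below are
# illustrative placeholders only; the study's actual counts and chosen test
# are not reported in the abstract.
from scipy.stats import chi2_contingency

# Rows: correct responses, incorrect responses
# Columns: used external information, did not use external information
table = [
    [120, 13],   # placeholder counts for correct responses
    [103,  6],   # placeholder counts for incorrect responses
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
```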
Background: The accurate assessment of physician academic productivity is paramount and is frequently factored into decisions on promotion and tenure. Current metrics such as the h-index have been criticized as biased toward older researchers and potentially misleading. The relative citation ratio (RCR) is a newer metric that has been shown within other surgical subspecialties to be a superior means of measuring academic productivity. We sought to demonstrate that the RCR is a valid means of assessing academic productivity among plastic surgeons and to determine which demographic factors are associated with higher RCR values. Methods: All Accreditation Council for Graduate Medical Education-accredited plastic and reconstructive surgery residency programs and their faculty throughout the United States were compiled from the American Council of Academic Plastic Surgeons website. Demographic information was obtained for each surgeon via the program’s website, and RCR data were obtained using iCite, a bibliometrics tool provided by the National Institutes of Health. Surgeons were excluded if any demographic or RCR data were unavailable. Results: A total of 785 academic plastic surgeons were included in the analysis. Surgeons who belonged to departments with more than six members had a higher median RCR (1.23). Increasing academic rank (assistant: 12.27; associate: 24.16; professor: 47.58), chief/chairperson status (47.58), male gender (25.59), and an integrated model of residency training (24.04) were all associated with a higher median weighted RCR. Conclusions: The RCR is a valid metric for assessing the academic productivity of plastic surgeons. Further research is warranted to assess disparities among demographic groups within academic plastic surgery.
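The abstract does not detail how per-surgeon RCR values were aggregated. The sketch below assumes that "weighted RCR" means the sum of per-article RCR values (a common convention) and uses the NIH iCite bulk API; the endpoint and field names should be verified against current iCite documentation, and the PMIDs are placeholders rather than data from the study.

```python
# Hypothetical sketch: pulling relative citation ratio (RCR) values from NIH iCite
# for one surgeon's publication list and summarizing them. Assumes the iCite bulk
# API endpoint and the "relative_citation_ratio" field; verify against current
# iCite documentation before use.
from statistics import median
import requests

ICITE_URL = "https://icite.od.nih.gov/api/pubs"

def fetch_rcrs(pmids):
    """Return the RCR value for each PMID that iCite reports one for."""
    resp = requests.get(ICITE_URL, params={"pmids": ",".join(map(str, pmids))})
    resp.raise_for_status()
    records = resp.json().get("data", [])
    return [r["relative_citation_ratio"] for r in records
            if r.get("relative_citation_ratio") is not None]

# Placeholder PMIDs standing in for a single surgeon's publication list
pmids = [10000001, 10000002, 10000003]
rcrs = fetch_rcrs(pmids)

print("median RCR:", median(rcrs))   # typical article-level impact
print("weighted RCR:", sum(rcrs))    # assumed definition: sum of per-article RCRs
```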