2023
DOI: 10.3390/healthcare11222955
Reply to Moreno et al. Comment on “Sallam, M. ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns. Healthcare 2023, 11, 887”

Malik Sallam

Abstract: I would like to thank the authors for their commentary on the publication “ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns” [...]

Cited by 8 publications (5 citation statements)
References 11 publications
“…This limitation of generative AI performance is particularly relevant in healthcare education, where the ability to apply knowledge creatively and critically is essential [43]. The observed limitation raises concerns about the current reliability of AI as an educational tool, which was reported in the context of various AI chatbots [7,18,34,44]. Collectively, these results highlight the critical areas for future development and improvement in AI training approaches.…”
Section: Discussion (mentioning)
confidence: 98%
“…This variability can be attributed to different factors, such as the AI model used, the prompting approach, and importantly the language(s) used in prompting [31,32]. Such findings highlight the necessity for continued research to elucidate the determinants of AI models' performance, thereby informing the refinement of AI algorithms for improved performance and subsequent improved utility in various disciplines such as healthcare education [7,33,34].…”
Section: Introduction (mentioning)
confidence: 99%
“…Even though general purpose huge LLMs have demonstrated amazing capabilities to generalize to a wide variety of domains as zero-shot frameworks, their performance may still suffer in terms of relevance or specificity to a sensitive domain and may “ hallucinate ” while being forced to generate critical information 14,16,25 . Training of these models requires humongous text corpora, usually curated from publicly available text content on the world wide web.…”
Section: Discussion (mentioning)
confidence: 99%
“…They may generate generic responses which can be incompetent to domain-specific questions. They also demonstrate known issues of hallucination which can be especially dangerous in the sensitive domains like medicine 13–16 . Research effort has been put into domain-specific LLM development such as MedPALM 17 (540B), MedAlpaca 18 (13B), and BioGTP 19 (1.5B).…”
Section: Introduction (mentioning)
confidence: 92%
“…Similarly, the potential of models such as ChatGPT to enhance medical practices, while addressing concerns about safety, ethics, and maintaining the human element in patient care, has been reviewed in [7]. A related systematic review evaluated the benefits and limitations of ChatGPT in healthcare education, research, and practice, based on an analysis of 60 publications from PubMed/MEDLINE and Google Scholar [14]. Insights into the opportunities and challenges of LLMs for biomedical or clinical usage, including their roles in pre-consultation, diagnosis, data management, and their support for medical education and writing, have also been offered in [15].…”
Section: Introduction (mentioning)
confidence: 99%