2024
DOI: 10.1101/2024.01.25.24301774
Preprint

Validation of the QAMAI tool to assess the quality of health information provided by AI

Luigi Angelo Vaira,
Jerome R. Lechien,
Vincenzo Abbate
et al.

Abstract: Objective: To propose and validate the Quality Assessment of Medical Artificial Intelligence (QAMAI), a tool specifically designed to assess the quality of health information provided by AI platforms. Study design: Observational and evaluative study. Setting: 27 surgeons from 25 academic centers worldwide. Methods: The QAMAI tool was developed by a panel of experts following guidelines for the development of new questionnaires. A total of 30 responses from ChatGPT4, addressing patient queries, theoretical q…

Cited by 2 publications (2 citation statements) | References 40 publications
“…Recent studies have significantly contributed to understanding the potential and limitations of AI in otolaryngology, emphasizing the need for rigorous validation of AI tools before their integration into clinical practice. For instance, the development and validation of the QAMAI tool demonstrate a systematic approach to evaluating AI-generated health information, showing robust construct validity and high internal consistency, which could be instrumental in ensuring the reliability of AI platforms, including ChatGPT, within otolaryngology settings (23). Furthermore, the complexity of using AI for synthesizing clinical guidelines is highlighted by the variability in AI responses compared to expert consensus, underscoring the necessity for AI to be used with caution, particularly in complex medical fields like otolaryngology (24).…”
Section: Discussion (mentioning)
confidence: 99%
“…The QAMAI tool stems from a methodology that has been well-validated and extensively applied for evaluating the quality of health information across various platforms, including websites [9], social networks [10], and other multimedia channels [11]. Following guidelines from a panel of experts, the tool underwent validation for its construct validity, internal consistency, and reliability using 40 LLM responses on colorectal surgery [12]. Opting for a qualitative analysis enabled the capture of subtleties and nuances in user experience and perception that frequently elude quantitative measures.…”
Section: Methods (mentioning)
confidence: 99%