2023
DOI: 10.1101/2023.12.19.23300189
Preprint

ChatGPT for tinnitus information and support: response accuracy and retest after three months

W. Wiktor Jedrzejczak,
Piotr H. Skarzynski,
Danuta Raj-Koziak
et al.

Abstract: Background: ChatGPT – a conversational tool based on artificial intelligence – has recently been tested on a range of topics. However, most of the testing has involved broad domains of knowledge. Here we test ChatGPT’s knowledge of tinnitus, an important but specialized aspect of audiology and otolaryngology. Testing involved evaluating ChatGPT’s answers to a defined set of 10 questions on tinnitus. Furthermore, given the technology is advancing quickly, we re-evaluated the responses to the same 10 questions 3 mo…

Cited by 2 publications (2 citation statements)
References: 26 publications
“…This study aimed to test repeatability using a set of standardized questions; however, we observed significant errors and variability in responses. This is particularly concerning when ChatGPT generates a narrative or answers an open-ended question without providing reliable sources or, in some cases, citing non-existent references [10]. The ability to track sources and verify responses is invaluable, particularly if responses can vary.…”
Section: Discussion
confidence: 99%
“…Preliminary studies in audiology suggest that while ChatGPT, alongside other chatbots like Google Bard (now Gemini) and Bing Chat (now Copilot), shows promise, it also exhibits errors and inaccuracies that underscore the need for careful oversight when used in specialized fields [9]. This is particularly evident in some audiology subtopics such as tinnitus, where the responses, although quite impressive, occasionally stray from the topic and, crucially, totally lack citations [10]. These latter two studies suggest that ChatGPT has the potential to provide information in more specialized medical fields like audiology and in specific topics like tinnitus, but still requires improvement before being reliable enough for serious applications.…”
Section: Introduction
confidence: 99%