2023
DOI: 10.1007/s10439-023-03227-9
Exploring the Potential of Chat GPT in Personalized Obesity Treatment

Cited by 51 publications (24 citation statements)
References 2 publications
“…As their capabilities are inherently shaped by the corpus they were trained on, their efficacy across different medical domains may vary based on the representation of those domains in the original dataset. Therefore, while we may anticipate a degree of adaptability, evaluating their performance individually within each target sub-domain is advisable to ensure precision and validity (32)(33)(34)(35).…”
Section: Speed and Efficiency (mentioning; confidence: 99%)
“…Potential for Personalization. Personalization is a crucial attribute for LLMs, focusing on integrating and interpreting user-specific information into their responses (34,35). In this study, we provide 'Dubravka', a custom-made mobile application, that interfaces with the GPT-3.5 Turbo API (Application Programming Interface).…”
Section: Here Figure (mentioning; confidence: 99%)
“…Likewise, ChatGPT should also be used with caution because of possible “major errors and biases.” It also has a “misinformation problem,” that is, it does not always give accurate information; thus, it could be “weaponized to spread disinformation” and create deep fakes 23 . Because GPT models rely on pattern recognition and statistical associations to generate responses, they do not really understand the context of a patient’s unique situation 24 . In some cases, the program will generate scientifically plausible responses, which, in fact, are totally false, that is, “hallucination.” 2 This false information can be misleading, also causing delayed or incorrect treatment.…”
Section: Scientific Research (mentioning; confidence: 99%)
“…analyzing patient data to understanding complex medical literature, offering health information, and improving text writing, indicating the promising potential of future GPT versions. [7][8][9][10][11][12][13][14] Furthermore, ChatGPT can improve health service accessibility and quality, particularly for patients in remote areas, by providing medical information and aiding in the comprehension of complex medical data, thus facilitating informed decisions. [4,13,15] Thus, investigating ChatGPT's capacity to offer medical consultation represents a significant stride in potentially elevating public health's overall quality and accessibility.…”
(mentioning; confidence: 99%)