Chordoma is a low-grade notochordal tumor of the skull base, mobile spine, and sacrum that behaves malignantly and confers a poor prognosis despite its indolent growth pattern. These tumors often present late in the disease course, tend to encase adjacent neurovascular structures, seed resection cavities, recur locally, and respond poorly to radiotherapy and conventional chemotherapy, all of which makes chordoma challenging to treat. Extent of surgical resection and adequacy of surgical margins are the most important prognostic factors; patients with chordoma should therefore be cared for by a highly experienced, multidisciplinary surgical team at a quaternary center. Ongoing research into the molecular pathophysiology of chordoma has identified several pathways that may serve as targets for molecular therapy, including multiple receptor tyrosine kinases (e.g., platelet-derived growth factor receptor [PDGFR], epidermal growth factor receptor [EGFR]), downstream cascades (e.g., phosphoinositide 3-kinase [PI3K]/protein kinase B [Akt]/mechanistic target of rapamycin [mTOR]), brachyury (a transcription factor expressed ubiquitously in chordoma but not in other tissues), and the fibroblast growth factor [FGF]/mitogen-activated protein kinase kinase [MEK]/extracellular signal-regulated kinase [ERK] pathway. In this review, we discuss the pathophysiology, diagnosis, and modern treatment paradigms of chordoma, with an emphasis on ongoing research and advances in the field that may lead to improved outcomes for patients with this challenging disease.
The published Class IV evidence suggests that bariatric surgery may be an effective treatment for idiopathic intracranial hypertension (IIH) in obese patients, both in terms of symptom resolution and visual outcomes. Prospective, controlled studies are needed to better elucidate its role.
BACKGROUND AND OBJECTIVES: General large language models (LLMs), such as ChatGPT (GPT-3.5), have demonstrated the capability to pass multiple-choice medical board examinations. However, the comparative accuracy of different LLMs, and LLM performance on assessments composed predominantly of higher-order management questions, is poorly understood. We aimed to assess the performance of 3 LLMs (GPT-3.5, GPT-4, and Google Bard) on a question bank designed specifically for neurosurgery oral boards examination preparation. METHODS: The 149-question Self-Assessment Neurosurgery Examination Indications Examination was used to query LLM accuracy. Questions were input in a single-best-answer, multiple-choice format. χ2, Fisher exact, and univariable logistic regression tests assessed differences in performance by question characteristics. RESULTS: On a question bank with predominantly higher-order questions (85.2%), ChatGPT (GPT-3.5) and GPT-4 answered 62.4% (95% CI: 54.1%-70.1%) and 82.6% (95% CI: 75.2%-88.1%) of questions correctly, respectively. By contrast, Bard scored 44.2% (66/149, 95% CI: 36.2%-52.6%). GPT-3.5 and GPT-4 demonstrated significantly higher scores than Bard (both P < .01), and GPT-4 outperformed GPT-3.5 (P = .023). Among 6 subspecialties, GPT-4 had significantly higher accuracy in the Spine category relative to GPT-3.5 and in 4 categories relative to Bard (all P < .01). Incorporation of higher-order problem solving was associated with lower question accuracy for GPT-3.5 (odds ratio [OR] = 0.80, P = .042) and Bard (OR = 0.76, P = .014), but not GPT-4 (OR = 0.86, P = .085). GPT-4's performance on imaging-related questions surpassed GPT-3.5's (68.6% vs 47.1%, P = .044) and was comparable with Bard's (68.6% vs 66.7%, P = 1.000). However, GPT-4 demonstrated significantly lower rates of "hallucination" on imaging-related questions than both GPT-3.5 (2.3% vs 57.1%, P < .001) and Bard (2.3% vs 27.3%, P = .002).
Absence of a text description in the question stem predicted significantly higher odds of hallucination for GPT-3.5 (OR = 1.45, P = .012) and Bard (OR = 2.09, P < .001). CONCLUSION: On a question bank of predominantly higher-order management case scenarios for neurosurgery oral boards preparation, GPT-4 achieved a score of 82.6%, outperforming ChatGPT (GPT-3.5) and Google Bard.
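The accuracy figures above are binomial proportions with 95% confidence intervals. As a sketch of how such intervals arise, the snippet below computes Wilson score intervals for the three reported scores. The correct-answer counts for GPT-4 (123/149) and GPT-3.5 (93/149) are inferred from the reported percentages, not stated in the abstract, and the abstract likely used an exact (Clopper-Pearson) interval, so the Wilson figures differ slightly from the reported CIs.

```python
import math

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z = 1.96 for 95%)."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Counts assumed from the reported percentages (n = 149 questions);
# only Bard's 66/149 is given explicitly in the abstract.
for model, k in [("GPT-4", 123), ("GPT-3.5", 93), ("Bard", 66)]:
    lo, hi = wilson_ci(k, 149)
    print(f"{model}: {k}/149 = {k/149:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

For GPT-4, for example, this yields roughly 75.7%-87.8%, close to the 75.2%-88.1% reported, consistent with the abstract's use of an exact binomial interval on the same counts.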