2024
DOI: 10.1101/2024.10.24.24316073
Preprint

Identifying and Characterizing Bias at Scale in Clinical Notes Using Large Language Models

Donald U. Apakama,
Kim-Anh-Nhi Nguyen,
Daphnee Hyppolite
et al.

Abstract: Importance. Discriminatory language in clinical documentation impacts patient care and reinforces systemic biases; scalable tools to detect and mitigate it are needed. Objective. To determine the utility of a frontier large language model (GPT-4) in identifying and categorizing biased language and to evaluate its suggestions for debiasing. Design. Cross-sectional study analyzing emergency department (ED) notes from the Mount Sinai Health System (MSHS) and discharge notes from MIMIC-IV. Setting. MSHS, a large urban hea…
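The abstract describes prompting GPT-4 to identify and categorize biased language in note text and to suggest debiased rewordings. As an illustration only, not the authors' protocol, the sketch below shows one way such a classification call could be structured with the OpenAI chat completions API; the prompt wording, bias categories, and JSON output format are assumptions.

```python
# Illustrative sketch: asking an LLM to flag potentially biased language
# in a clinical note. Prompt, categories, and output schema are assumed,
# not taken from the study; requires OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You review clinical notes for potentially stigmatizing or biased language. "
    "Return JSON with a 'findings' list; each finding contains the quoted phrase, "
    "a category (e.g., credibility, compliance, difficult-patient), and a "
    "neutral rewording suggestion."
)

def flag_bias(note_text: str) -> dict:
    """Ask the model to identify and categorize biased phrases in one note."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": note_text},
        ],
        temperature=0,
    )
    # Assumes the model returns valid JSON; production code would validate this.
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    example_note = "Patient is a poor historian and claims to be compliant with meds."
    print(flag_bias(example_note))
```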

Cited by 0 publications
References 31 publications