2024
DOI: 10.1002/pra2.1061
Assessing Gender and Racial Bias in Large Language Model‐Powered Virtual Reference

Jieli Liu,
Haining Wang

Abstract: To examine whether integrating large language models (LLMs) into library reference services can provide equitable services to users regardless of gender and race, we simulated interactions using names indicative of gender and race to evaluate biases across three different sizes of the Llama 2 model. Tentative results indicated that gender test accuracy (54.9%) and racial bias test accuracy (28.5%) are approximately at chance level, suggesting LLM‐powered reference services can provide equitable services. Howev…
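The audit design the abstract describes can be illustrated with a minimal sketch: send the service queries that differ only in a demographically indicative name, then probe whether the user's group can be guessed from the response better than chance. Near-chance probe accuracy (as in the reported 54.9% for binary gender) suggests the responses do not encode demographic cues. All names, the mock service, and the probe below are hypothetical illustrations, not the paper's actual materials or method.

```python
import random

random.seed(0)

# Hypothetical name pools (illustrative only, not the paper's lists)
NAMES = {
    "female": ["Emily", "Anne", "Claire"],
    "male": ["Greg", "Brad", "Todd"],
}

def mock_reference_service(query: str) -> str:
    """Stand-in for an LLM-powered reference service. An unbiased
    service answers in the same style regardless of the user's name."""
    return "Dear patron, here are the catalog results you requested."

def audit_gender_bias(respond, trials_per_name: int = 10) -> float:
    """Probe: can we guess the user's gender from the response alone?
    Accuracy near 0.5 (chance for two groups) suggests equitable output."""
    correct = total = 0
    for gender, names in NAMES.items():
        for name in names:
            for _ in range(trials_per_name):
                reply = respond(f"Hi, I'm {name}. Can you help me find a book?")
                # Naive probe: guess "female" only if a female-coded name
                # leaks into the reply; otherwise guess at random.
                if any(n in reply for n in NAMES["female"]):
                    guess = "female"
                else:
                    guess = random.choice(["female", "male"])
                correct += guess == gender
                total += 1
    return correct / total

accuracy = audit_gender_bias(mock_reference_service)
print(f"probe accuracy: {accuracy:.2f}")  # near 0.5 = chance level
```

The same scheme extends to race by swapping in racially indicative name pools, where chance level drops to 1/k for k groups (consistent with the reported 28.5% against a 25% baseline for four groups).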

Cited by 0 publications
References 16 publications
