2022
DOI: 10.21203/rs.3.rs-1356281/v1
Preprint

Dbias: Detecting biases and ensuring Fairness in news articles

Abstract: The problem of fairness is garnering increasing interest in the academic and broader literature due to the growing use of data-centric systems and algorithms in machine learning. This paper introduces Dbias (https://pypi.org/project/Dbias/), an open-source Python package for ensuring fairness in news articles. Dbias can take any text and determine whether it is biased. It then detects biased words in the text, masks them, and suggests a set of sentences with new words that are bias-free or at least less biased. W…
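The pipeline the abstract describes (classify the text, locate biased words, mask them, propose less-biased rewrites) can be sketched with off-the-shelf Hugging Face pipelines. The sketch below is not Dbias's documented API: the two bias-model identifiers are placeholders for fine-tuned checkpoints, and only the fill-mask model named is a stock checkpoint.

```python
# Minimal sketch of the classify -> detect -> mask -> suggest flow from the
# abstract. The "bias-classifier" and "bias-recognizer" model names are
# placeholders (assumptions), not Dbias's actual checkpoints or API.
from transformers import pipeline

classifier = pipeline("text-classification", model="bias-classifier")   # placeholder checkpoint
recognizer = pipeline("token-classification", model="bias-recognizer")  # placeholder checkpoint
unmasker = pipeline("fill-mask", model="distilroberta-base")            # stock masked LM

def debias(text: str, top_k: int = 3):
    # Step 1: classify the whole text as biased or not.
    verdict = classifier(text)[0]
    if verdict["label"].lower() != "biased":
        return text, []

    # Step 2: token-level recognition of biased words (character offsets).
    spans = recognizer(text)

    # Steps 3-4: mask each biased span and let a masked LM propose
    # replacement sentences, ranked by model score.
    suggestions = []
    for span in spans:
        masked = text[: span["start"]] + unmasker.tokenizer.mask_token + text[span["end"] :]
        for cand in unmasker(masked, top_k=top_k):
            suggestions.append((cand["score"], cand["sequence"]))
    suggestions.sort(reverse=True)
    return text, [s for _, s in suggestions]
```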

Cited by 5 publications (1 citation statement)
References 47 publications
“…Analyzing the impact of these biases on the outputs generated by LLMs highlighted the potential for reinforcement of harmful stereotypes, emphasizing the need for continuous monitoring and improvement [6]. Sentiment analysis techniques were applied to detect and quantify biases in model responses, providing a quantifiable measure of bias presence [7,8]. Frameworks for bias detection were refined, incorporating advanced metrics that allowed for a more nuanced understanding of bias manifestations in LLMs [9].…”
Section: Bias and Fairness in Large Language Models
Citation type: mentioning (confidence: 99%)
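The "sentiment analysis techniques... to quantify biases" mentioned in this citation statement are often implemented as counterfactual evaluation: score a model's responses to prompts that differ only in a demographic term and compare the sentiment distributions. A hedged sketch of that idea follows; the template and group terms are illustrative, and in practice the scored strings would be the LLM's generated responses rather than the templated sentences themselves.

```python
# Sketch of sentiment-based bias quantification via counterfactual inputs:
# identical templates differing only in a group term, scored with a stock
# sentiment pipeline. Template and group terms are illustrative only.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default SST-2 checkpoint

TEMPLATE = "The {group} engineer presented the proposal."
GROUPS = ["young", "elderly", "male", "female"]

def signed_score(result: dict) -> float:
    # Collapse POSITIVE/NEGATIVE labels onto one [-1, 1] axis.
    return result["score"] if result["label"] == "POSITIVE" else -result["score"]

scores = {g: signed_score(sentiment(TEMPLATE.format(group=g))[0]) for g in GROUPS}
gap = max(scores.values()) - min(scores.values())
print(scores)
print(f"max sentiment gap across groups: {gap:.3f}")  # larger gap -> stronger bias signal
```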