2021
DOI: 10.1109/mts.2021.3056282

From Artificial Intelligence Bias to Inequality in the Time of COVID-19

Abstract: As Secretary-General of the United Nations, Antonio Guterres said during the 2020 Nelson Mandela Annual Lecture, "COVID-19 has been likened to an X-ray, revealing fractures in the fragile skeleton of the societies we have built." Without a doubt, the COVID-19 pandemic has exposed and exacerbated existing global inequalities. Whether at the local, national, or international scale, the gap between the privileged and the vulnerable is growing wider, resulting in a broad increase in inequality across all dimension…

Cited by 24 publications (15 citation statements)
References 39 publications
“…AI systems are already contributing to SDG 11 in numerous cities around the world, but their use for smart cities has been criticized for lacking genuine sustainability and citizen-centric approach as well as for being focused on highly developed economies [187]. Moreover, several targets ( ) and potential bias [192, 200-205]. Indeed, AI has been widely criticized for augmenting inequality, bias, discrimination, and reproducing hierarchies [204].…”
Section: Group 3: AI as a DI
mentioning
confidence: 99%
“…The term 'bias in data' mainly refers to the four most common data biases in Machine Learning: observer bias², which may appear when annotators are guided by their own opinion to label data; sampling bias, when data is acquired in such a way that not all samples have the same sampling probability³; data handling bias, when the way of handling data distorts the classifier's output; and instrument bias, meaning imperfections in the instrument and/or method used to collect the data⁴. The occurrence of bias in data is a common, often unnoticed, and underestimated problem in most data analysis issues that often degrades or distorts the results⁵. In general, identifying and further removing biases is a tedious and difficult task.…”
Section: Introduction
mentioning
confidence: 99%
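Of the four biases named in the statement above, sampling bias is the easiest to demonstrate concretely. The following minimal Python sketch is not taken from the cited work; the group labels and the 3:1 acquisition weights are illustrative assumptions. It shows how unequal sampling probabilities distort the class proportions a downstream model would be trained on.

```python
# Minimal sketch of sampling bias: records are acquired with unequal
# probabilities, so the collected sample misrepresents the population.
# Group labels ("A", "B") and the 3:1 acquisition weights are assumptions
# made for illustration only.
import random
from collections import Counter

random.seed(0)

# Hypothetical population: half the records belong to group "A", half to "B".
population = ["A"] * 5000 + ["B"] * 5000

# Biased acquisition: group "A" records are three times as likely to be collected.
weights = [3.0 if g == "A" else 1.0 for g in population]
biased_sample = random.choices(population, weights=weights, k=1000)

# Unbiased acquisition for comparison: every record has the same probability.
uniform_sample = random.sample(population, k=1000)

def proportions(sample):
    """Return the fraction of each group in a sample."""
    counts = Counter(sample)
    total = sum(counts.values())
    return {group: round(counts[group] / total, 3) for group in sorted(counts)}

print("population:     ", proportions(population))      # {'A': 0.5, 'B': 0.5}
print("uniform sample: ", proportions(uniform_sample))   # close to the population
print("biased sample:  ", proportions(biased_sample))    # group "A" over-represented (~0.75)
```

A classifier fit to the biased sample would see group "A" roughly three times as often as group "B", even though the two groups are equally common in the population, which is exactly the kind of distortion the quoted passage warns about.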
“…Glasses or No Glasses: Kaggle dataset from the course T81-855: Applications of Deep Learning at Washington University in St. Louis⁵. The glasses annotations will be publicly available 7/11…”
mentioning
confidence: 99%
“…In this study, the author investigates how teachers have adapted their instructional practices to address the impact of the pandemic, the influence of globalization on curriculum development, and the integration of AI-powered tools in language education (Klimova et al., 2023; Mutongoza & Olawale, 2022; Luengo-Oroz et al., 2021). Moreover, the author explores how teachers perceive the evolving role of literature and language education in nurturing global competencies, fostering cultural understanding, and promoting linguistic diversity (Marais, 2021; Klimova et al., 2023).…”
Section: Introduction
mentioning
confidence: 99%