2018
DOI: 10.1038/d41586-018-05707-8
AI can be sexist and racist — it’s time to make it fair

Cited by 529 publications (337 citation statements)
References 3 publications
“…In light of its powerful transformative force and profound impact across various societal domains, AI has sparked ample debate about the principles and values that should guide its development and use [5,6]. Fears that AI might jeopardize jobs for human workers [7], be misused by malevolent actors [8], elude accountability, or inadvertently disseminate bias and thereby undermine fairness [9] have been at the forefront of the recent scientific literature and media coverage. Several studies have discussed the topic of ethical AI [10-13], notably in meta-assessments [14-16] or in relation to systemic risks [17,18] and unintended negative consequences like algorithmic bias or discrimination [19-21].…”
Section: Introduction (mentioning, confidence: 99%)
“…Natural language processing (NLP) algorithms have been reported to incorporate inherent bias when trained on human language. NLP techniques such as word embedding are now used to objectively evaluate gender and ethnic stereotypes in text data.…”
Section: Introduction (mentioning, confidence: 99%)
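The word-embedding approach described in that statement can be illustrated with a minimal sketch. The snippet below computes a simple association score, in the spirit of embedding-association tests such as WEAT, as the difference in mean cosine similarity between a target word and two gendered attribute sets. The vocabulary and the random vectors are hypothetical stand-ins; a real analysis would load pretrained embeddings (e.g. word2vec or GloVe) instead.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in embeddings: random 50-dimensional vectors.
# In practice these would come from a pretrained embedding model.
words = ["doctor", "nurse", "engineer", "teacher", "he", "she", "man", "woman"]
emb = {w: rng.normal(size=50) for w in words}

def cos(u, v):
    # Cosine similarity between two vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, male_terms, female_terms):
    # Mean similarity to the male attribute set minus mean similarity
    # to the female attribute set; positive values mean the word sits
    # closer to the male terms in embedding space.
    m = np.mean([cos(emb[word], emb[t]) for t in male_terms])
    f = np.mean([cos(emb[word], emb[t]) for t in female_terms])
    return m - f

male = ["he", "man"]
female = ["she", "woman"]
for w in ["doctor", "nurse", "engineer", "teacher"]:
    print(f"{w:10s} gender association: {association(w, male, female):+.3f}")

With real embeddings, a systematically positive score for "engineer" and negative score for "nurse" would be the kind of stereotype signal such studies report; with the random vectors above the scores are just noise.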
“…Finally, as MFMR seeks clusters that are unaffected by confounders like population structure, age or sex, it may be useful for clustering in settings where protecting certain information is important for privacy or fairness [69]. In this sense, MFMR is to GMM roughly as AC-PCA [70] or contrastive PCA [71] are to ordinary PCA.…”
(mentioning, confidence: 99%)
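For readers unfamiliar with the analogy, contrastive PCA finds directions with high variance in a foreground dataset but low variance in a background dataset, which lets it discount confounding structure that both share. A minimal sketch, assuming the standard cPCA formulation (top eigenvectors of cov(foreground) − α·cov(background)); the data, dimensions, and α value here are hypothetical:

import numpy as np

def contrastive_pca(foreground, background, alpha=1.0, n_components=2):
    # Directions are the top eigenvectors of the contrastive covariance
    # cov(foreground) - alpha * cov(background).
    cf = np.cov(foreground, rowvar=False)
    cb = np.cov(background, rowvar=False)
    vals, vecs = np.linalg.eigh(cf - alpha * cb)
    order = np.argsort(vals)[::-1]  # eigh returns ascending eigenvalues
    return vecs[:, order[:n_components]]

rng = np.random.default_rng(1)
# Hypothetical data: the background captures only the confounder
# (e.g. population structure); the foreground contains the same
# confounding variation plus a one-dimensional signal of interest.
background = rng.normal(size=(200, 10))
foreground = rng.normal(size=(200, 10)) + rng.normal(size=(200, 1)) * rng.normal(size=10)

W = contrastive_pca(foreground, background, alpha=2.0)
projected = (foreground - foreground.mean(axis=0)) @ W
print(projected.shape)  # (200, 2)

Larger α penalizes background-dominated directions more aggressively; in the quoted comparison, MFMR plays the analogous role for mixture-model clustering that this contrastive step plays for PCA.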