2021
DOI: 10.1007/s40593-021-00248-0
Educating Software and AI Stakeholders About Algorithmic Fairness, Accountability, Transparency and Ethics

Abstract: This paper discusses educating stakeholders of algorithmic systems (systems that apply Artificial Intelligence/Machine Learning algorithms) in the areas of algorithmic fairness, accountability, transparency and ethics (FATE). We begin by establishing the need for such education and identifying the intended consumers of educational materials on the topic. We discuss the topics of greatest concern and in need of educational resources; we also survey the existing materials and past experiences in such education, …

Cited by 49 publications (23 citation statements)
References 48 publications
“…Some of the records that did present interventions for teachers ( n = 4) were excluded because they did not evaluate (in its broadest definition) its effect on bias/equitable teaching. These either focused on the evaluation of algorithmic bias (Whalen and Glinksi, 1976 ; Bogina et al, 2021 ), targeted the bias of the learners, rather than the teachers (Dinnar et al, 2021 ), or targeted teachers' biases toward particular educational techniques, such as written homework and clicker polls, rather than bias against learners (Duzhin and Gustafsson, 2018 ).…”
Section: Results
confidence: 99%
“…To bridge the disconnect between the disability and AI communities, the process of stewarding accessibility datasets requires greater transparency of data use as well as awareness [17], especially to challenge inclusivity issues that are pressing for marginalized communities [30,76].…”
Section: Bringing Trustworthiness In Data
confidence: 99%
“…There is the danger that technological engineers, software developers and businesspeople do not necessarily have the same goals and incentives as individual consumers, ordinary citizens, policy makers, or societies at large. As such, given the rapid advancements, there may be the risk that the needs of some stakeholders would be overlooked [13,15,27,31,71,95]. This is where digital humanists can assume a crucial role in these dynamics: with a focus not primarily on profit or technological concerns, they should research and discuss how such changes impact humans and societies, preserving a critical stance towards these developments and at the same time making constructive suggestions to help engineers who often have less time to dwell on ethical concerns, or have to meet deadlines and revenue goals.…”
Section: Implications For the (Digital) Humanities
confidence: 99%