2020
DOI: 10.1007/s00146-020-01066-z
Artificial intelligence and the value of transparency

Abstract: Some recent developments in Artificial Intelligence, especially the use of machine learning systems trained on big data sets and deployed in socially significant and ethically weighty contexts, have led to a number of calls for "transparency." This paper explores the epistemological and ethical dimensions of that concept, as well as surveying and taxonomising the variety of ways in which it has been invoked in recent discussions. Whilst "outward" forms of transparency (concerning the relationship between an AI …

Cited by 53 publications (39 citation statements) | References 21 publications
“…These case studies also add to scholarly literature regarding AI transparency initiatives (Ouchchy et al. 2020; Walmsley 2020; de Fine Licht and de Fine Licht 2020). The goal of such initiatives is often to respond to and ultimately deflect government regulation and oversight through promises of internal review and greater transparency, and these tactics are clearly employed by both Facebook and TikTok.…”
Section: Embedded, Integrated and Increasingly Essential
confidence: 80%
“…In the following paper, I turn to a mix of algorithmic transparency initiatives and statements by representatives of Facebook and TikTok as case studies of how AI is embedded in these platforms, with attention to the promotion of AI content moderation as a solution to the circulation of problematic material and dis- and misinformation. Here, I add to conversations on AI transparency that include the variety of forms transparency takes (Walmsley 2020), how the ethical issues of AI are portrayed in media coverage and public discourse (Ouchchy et al. 2020), the relationship between transparency and how the public perceives AI decision-making as legitimate or not (de Fine Licht and de Fine Licht 2020), as well as the promise of AI for content moderation (Gillespie 2020). Additionally, this analysis of Facebook and TikTok’s AI and algorithmic promotion strategies serves as a critical supplement to scholarly accounts that treat algorithmic systems as diffuse sociotechnical systems (Seaver 2017) that “live in dynamic relation to other material and discursive elements of software systems and the setting that produce them” (Dourish 2016: 1).…”
Section: Introduction
confidence: 99%
“…In summation, this subsection has shown that we need to consider a possible trade-off between transparency and adding inputs. As previously noted, the point here is… See, e.g., Walmsley (2020) for a discussion of how contestability can be decoupled from transparency. See Danaher (2016) for an argument about the risks involved in using algorithms in political decision-making.…”
Section: Transparency
confidence: 93%
“…It is sufficient that it is common and problematic and hence deserves consideration. Transparency is broadly considered an important property (see, e.g., AI HLEG, 2019; Brey et al., 2019; Danaher, 2016; Floridi et al., 2018; Wachter et al., 2017; Walmsley, 2020; cf. also Zerilli et al., 2018 for a more critical perspective and more references).…”
Section: Transparency
confidence: 99%
“…Importantly, the data processing of SCTs is often not transparent, in particular for the target audience of children. The side effect of opacity, as the absence of transparency, has been well documented in the literature on social robots, AI and automated decision-making systems (Burrell 2016; Felzmann et al. 2019a, b, 2020; Larsson & Heintz 2020; Wachter et al. 2017; Walmsley 2020; Zerilli et al. 2019). SCTs are complex cyber-physical products, whose inner workings might not be clear and transparent, neither for children, nor for parents and adults in general (Keymolen and Van der Hof 2019; Fosch-Villaronga et al. 2018a, b).…”
Section: Technical Level
confidence: 99%