2023
DOI: 10.1177/20539517231177620
Dislocated accountabilities in the “AI supply chain”: Modularity and developers’ notions of responsibility

Abstract: Responsible artificial intelligence guidelines ask engineers to consider how their systems might harm. However, contemporary artificial intelligence systems are built by composing many preexisting software modules that pass through many hands before becoming a finished product or service. How does this shape responsible artificial intelligence practice? In interviews with 27 artificial intelligence engineers across industry, open source, and academia, our participants often did not see the questions posed in r…

Cited by 41 publications (17 citation statements) | References 59 publications
“…Modularity of software processes has been found to be a factor that may disconnect AI developers from accountability for their system [61]. A successful drill exposes the single modules and their interfaces, helping participants develop a better understanding of the responsible AI processes of the team as a whole.…”
Section: Mapping Responsibilities to Roles Within the Organisation (mentioning)
confidence: 99%
“…However, accounting for these potential benefits within the contexts of AI value chains enables us to identify many concomitant harms: novel insights or gains to efficiency in some parts of an AI value chain may raise new risks in others (Cobbe, Veale, & Singh, 2023; Gansky & McDonald, 2022; Widder & Nafus, 2023); contributions to SDGs or "AI for good" initiatives may only be successful relative to a narrow set of measures (Aula & Bowles, 2023; Madianou, 2021; Moore, 2019); economic prosperity or environmental benefits may be inequitably distributed across different groups, communities, or geographies. While AI systems may produce beneficial outcomes for some value chain actors, pre-existing structural injustices in the social, political, and economic contexts of AI systems and their value chains warrant an assumption that the same systems will also produce harmful outcomes for other actors, particularly those who belong to historically marginalized communities (Birhane, 2021; Hind & Seitz, 2022).…”
Section: AI Value Chains and Benefits of AI (mentioning)
confidence: 99%
“…These include: accuracy of model predictions, recommendations, decisions, data outputs, and other informational resources created through the development and use of machine learning models (Angwin et al., 2016; Bender et al., 2021; Grote & Berens, 2022; Mökander & Axente, 2023; Rankin et al., 2020); the development and implementation of ethical quality assurance practices for model training, testing, and management (Burr & Leslie, 2023; Eitel-Porter, 2021); use of cloudwork platforms and outsourcing practices in data work and model work to improve data quality and accuracy (Irani, 2015; Perrigo, 2023). Ethical concerns related to lack of transparency in machine learning technologies involved in AI value chains include: incentivization and disclosure of funding sources for AI development and AI ethics research (Ahmed, Wahed, & Thompson, 2023; Ochigame, 2019; Whittaker, 2021); documentation, disclosure, and explanation of machine learning and automated decision-making processes and outcomes (Mitchell et al., 2019; Raji et al., 2020); inclusion or exclusion of stakeholder knowledges in model design, development, deployment, and application, particularly the exclusion of vulnerable data subjects, impacted groups, and marginalized communities (Birhane et al., 2022a, 2022b; Widder & Nafus, 2023); distribution and enforcement of accountability and liability for harms amongst value chain actors (Bartneck et al., 2020; Brown, 2023; Cobbe, Veale, & Singh, 2023; European Commission, 2022; Zech, 2021); possibilities for collective organizing, and protest against discriminatory and harmful AI practices (e.g., ACLU, 2023; Broderick, 2023).…”
Section: Examples of Related Resourcing Activities (mentioning)
confidence: 99%
“…We ask whether there is a system that allows us to relate different criteria to each other from a higher order. Reflecting on human-centricity requires a consideration of the perspectives on human-AI interaction (Anthony et al., 2023), the context characteristics of where AI is in use (Widder and Nafus, 2023), the individual demands of employees who are confronted with technology, and the responsibilities of stakeholders who are in charge of it (Polak et al., 2022). This is why we apply configurational theory (Mintzberg, 1993, 2023) to the meaning of the human-centricity of AI at work.…”
Section: Introduction (mentioning)
confidence: 99%