2022
DOI: 10.1177/13548565221133248
Cynical technical practice: From AI to APIs

Abstract: In this article, we examine how critical thinking, methods and design are used within the tech industry, using Philip Agre’s notion of critical technical practice (CTP) to consider the rise of ‘cynical’ technical practice. Arguments by tech firms that their AI systems are ethical, contextual, situated or fair, as well as APIs that are privacy-compliant and offer greater user control, are now commonplace. Yet, these justifications routinely disguise the organisational, and economic, reasons for the development …

Cited by 10 publications (5 citation statements)
References 56 publications
“…Critique may be construed in terms of processes of problematisation: the surfacing of more fundamental tensions and issues, beyond bugs to be fixed and, amongst others industry-led incentives to solve social problems through technological innovation (cf. Hind and Seitz (2022) in this issue). Such problematisations have been proposed to lead to disenchantments and ‘awakenings’ (Malik and Malik, 2021) as well as bigger shifts in ways of thinking or relating to technologically mediated practices and systems (cf.…”
Section: Considerations: CTP Beyond Agre
confidence: 85%
“…As the current critique of the risks of and biases in AI and related firings demonstrate, with Timnit Gebru as maybe the most prominent example, formulation of critique from within tech industries can be fraught, to say the least. Some of the contributors to this special issue also discuss the possibility of 'ethics washing' through particular forms of technical criticality (ab)used by tech industry players for whom the organisational bottom line is corporate interests, which may limit critical engagement related to broader societal issues (see Hind and Seitz (2022) in this issue).…”
Section: CTPs According To Indexed Research
confidence: 99%
“…Video footage captured for the purposes of training computer vision models intended for autonomous vehicles must be painstakingly segmented and labeled; each tree, person, vehicle, light post, traffic sign and so on (Kniazieva, 2022). Although end-to-end (E2E) approaches are increasingly common (Hind and Seitz, 2022) – removing the need for feature segmentation – they depend on larger, more diverse, datasets in order for such models to properly recognize objects. Finding or creating such datasets, offering a form of diversity of phenomena necessary for the task at hand is equally labour- and cost-intensive.…”
Section: Synthetic Data, Simulation and The Reality Gap
confidence: 99%
“…However, accounting for these potential benefits within the contexts of AI value chains enables us to identify many concomitant harms: novel insights or gains to efficiency in some parts of an AI value chain may raise new risks in others (Cobbe, Veale, & Singh, 2023; Gansky & McDonald, 2022; Widder & Nafus, 2023); contributions to SDGs or “AI for good” initiatives may only be successful relative to a narrow set of measures (Aula & Bowles, 2023; Madianou, 2021; Moore, 2019); economic prosperity or environmental benefits may be inequitably distributed across different groups, communities, or geographies. While AI systems may produce beneficial outcomes for some value chain actors, pre-existing structural injustices in the social, political, and economic contexts of AI systems and their value chains warrant an assumption that the same systems will also produce harmful outcomes for other actors, particularly those who belong to historically marginalized communities (Birhane, 2021; Hind & Seitz, 2022).…”
Section: AI Value Chains and Benefits of AI
confidence: 99%