Automated technologies populating today's online world rely on social expectations about how "smart" they appear to be. Algorithmic processing, along with the biases and missteps that occur in the course of development, shapes a cultural realm that in turn determines what these technologies come to be about. It is our contention that a robust analytical frame can be derived from culturally driven Science and Technology Studies, focusing on Callon's concept of translation. Excitement and apprehension must find a specific language in order to move past a state of latency. Translations are thus contextual and highly performative: they transform justifications into legitimate claims, translators into discursive entrepreneurs, and power relations into new forms of governance and governmentality. In this piece, we discuss three cases in which artificial intelligence was translated for the public: (i) the Montreal Declaration for a Responsible Development of Artificial Intelligence, held up as a prime example of how stakeholders manage to establish the terms of the debate on ethical artificial intelligence while avoiding substantive commitment; (ii) Mark Zuckerberg's 2018 congressional hearing, where he construed machine learning as the solution to the many problems his platform might encounter; and (iii) the normative renegotiations surrounding the gradual introduction of "killer robots" into military engagements. Of interest are not only the rational arguments put forward, but also the rhetorical maneuvers deployed. By examining the ramifications of these translations, we intend to show how they are constructed in the face of, and in relation to, forms of criticism, thus revealing the highly cybernetic deployment of artificial intelligence technologies.
The announcement of the sale of the Montreal company Element AI to ServiceNow Inc. at the end of 2020 was met with astonishment by the vast majority of actors in the Quebec and Canadian artificial intelligence ecosystem. How could the company have laid claim so quickly to narwhal status (that of a Canadian firm with a capitalization of more than one billion dollars) and been celebrated by the state, the media, and business circles, only to be bought out a few years later "for a song"? In this article, the Element AI case, its rise as much as its fall, is presented as ideal-typical of a "cybernetization of power" in which regulation aims to be facilitative, to operate at a distance, and to perceive control and communication as the two poles of a single feedback loop. While the emergence of Element AI was marked by its pursuit of "supercredibility," by partnerships in every direction, and by justifications extending even to ethics, its collapse signals a disorder and desynchronization that did not go without reprimand and contradiction, even from the state. This shift from justification to critique is rich in lessons, even though, or rather precisely because, it points to what is today a void within this ecosystem and to the difficulty the ecosystem has in projecting itself into even the near future.
Recent years have seen a proliferation of research on the social ramifications of algorithms (Eubanks 2018; Noble 2018), and the power of algorithms has been insightfully theorized (Gillespie 2016; Bucher 2018). At the same time, scholars have begun to examine the ties between algorithms and culture (Seaver 2017), describing algorithms as products of complex socio-algorithmic assemblages (Gillespie 2016, 24) with often very local socio-technical histories (Kitchin 2017). However, the spatial trajectories through which algorithms operate, and the specific sociocultural contexts in which they arise, have been largely overlooked. Accordingly, research tends to focus on American companies and on the effects their algorithms have on Euro-American users, while, in fact, algorithms are developed in various geographical locations and used in diverse socio-cultural contexts. That is, research on algorithms tends to disregard the heterogeneous contexts from which algorithms arise and the effects various cultural settings have on the production of algorithmic systems. This panel aims to fill these gaps by offering four empirical perspectives on algorithmic production in three prominent tech centers: China, Canada, and Israel. We will ask: How do cross-cultural encounters construct notions of privacy? How is algorithmic discrimination understood and acted upon in China? What symbolic and material resources were invested in making Canada's AI hubs? And how do Israeli tech companies use their algorithms to profile their Other? This panel thus offers to think beyond the Silicon Valley paradigm and to move toward a more diverse, culturally sensitive approach to the study of algorithms.