Artificial intelligence (AI) governance is required to reap the benefits and manage the risks brought by AI systems. This means that ethical principles, such as fairness, need to be translated into practicable AI governance processes. A concise definition of AI governance would allow researchers and practitioners to identify the constituent parts of the complex problem of translating AI ethics into practice. However, few efforts have been made to define AI governance thus far. To bridge this gap, this paper defines AI governance at the organizational level. Moreover, we delineate how AI governance enters a governance landscape that already comprises numerous areas, such as corporate governance, information technology (IT) governance, and data governance, and we accordingly position AI governance within an organization's governance structure in relation to these existing areas. Our definition and positioning of organizational AI governance pave the way for crafting AI governance frameworks and offer a stepping stone on the path toward governed AI.
Growing evidence suggests that the affordances of algorithms can reproduce socially embedded bias and discrimination and increase information asymmetries and power imbalances in socio-economic relations. We conceptualise these affordances in the context of socially mediated mass harms. We argue that algorithmic technologies may not alter what harms arise but instead affect harms qualitatively, that is, how and to what extent they emerge and on whom they fall. Drawing on three well-documented cases of algorithmic failure, we integrate the concerns identified in critical algorithm studies with the literature on social harm and zemiology. Reorienting the focus from socio-economic to socio-econo-technological structures, we illustrate how algorithmic technologies transform the dynamics of social harm production at the macro and meso levels by: (1) systematising bias and inequality; (2) accelerating harm propagation on an unprecedented scale; and (3) blurring the perception of harms.
This Article deploys cybernetic theory to argue that a novel legal impact imaginary has emerged. In this imaginary, the subjects of legal interventions are performed and enacted as cybernetic organisms, that is, as entities that process information and adapt to changes in their environment. This Article then argues that, in this imaginary, law finds its effectiveness not by threatening, cajoling, educating, and moralizing humans as before, but by affecting the composition of cybernetic organisms, giving rise to new kinds of legal subjects that transcend the former conceptual boundary between humans and non-humans, or persons and things. The cybernetic interventions work to change the cyborgs' behavioral responses, thus giving law a new modality of power. This Article develops a model for understanding cyborg regulation through case studies and argues that cyborg regulation deploys three distinct strategies: cyborgs can be controlled by affecting the informational inputs the entities receive, by agencement practices that intervene in the material constitution of cyborg cognitions, and by psycho-morphing humans to make them useful components of cyborg cognitive machineries. The Article ends with a discussion of the theoretical implications of the transition to the cyborg imaginary.