Models for understanding and holding systems accountable have long rested upon ideals and logics of transparency. Being able to see a system is sometimes equated with being able to know how it works and govern it—a pattern that recurs in recent work about transparency and computational systems. But can "black boxes" ever be opened, and if so, would that ever be sufficient? In this article, we critically interrogate the ideal of transparency, trace some of its roots in scientific and sociotechnical epistemological cultures, and present 10 limitations to its application. We specifically focus on the inadequacy of transparency for understanding and governing algorithmic systems and sketch an alternative typology of algorithmic accountability grounded in constructive engagements with the limitations of transparency ideals.
Grindr is a popular location-based social networking application for smartphones, predominantly used by gay men. This study investigates why users leave Grindr. Drawing on interviews with 16 men who stopped using Grindr, this article reports on the varied definitions of leaving, focusing on what people report leaving, how they leave, and what they say leaving means to them. We argue that leaving is not a singular moment, but a process involving layered social and technical acts: understandings of and departures from location-based media are bound up with an individual's location. Accounts of leaving Grindr destabilize normative definitions of both 'Grindr' and 'leaving', exposing a set of relational possibilities and spatial arrangements within and around which people move. We conclude with implications for the study of non-use and technological departure.
Part of understanding the meaning and power of algorithms means asking what new demands they might make of ethical frameworks, and how they might be held accountable to ethical standards. I develop a definition of networked information algorithms (NIAs) as assemblages of institutionally situated code, practices, and norms with the power to create, sustain, and signify relationships among people and data through minimally observable, semiautonomous action. Starting from Merrill's prompt to see ethics as the study of "what we ought to do," I examine ethical dimensions of contemporary NIAs. Specifically, in an effort to sketch an empirically grounded, pragmatic ethics of algorithms, I trace an algorithmic assemblage's power to convene constituents, suggest actions based on perceived similarity and probability, and govern the timing and timeframes of ethical action.

What new approach to media ethics might algorithms require? In comparison to concerns over how to produce or circulate media ethically, train ethical media professionals, or regulate media industries ethically, what might it mean to take an algorithmic assemblage (a mix of computational code, design assumptions, institutional contexts, folk theories, and user models) with semiautonomous agency as a unit of ethical analysis?

This essay attempts to define a networked information algorithm (NIA) and suggest three dimensions for scrutinizing its ethics: the ability to convene people by inferring associations from computational data, the power to judge similarity and suggest probable actions, and the capacity to organize time and influence when action happens. I argue that such a framework might offer starting points for holding algorithmic assemblages accountable, and I develop this argument through critical readings of NIAs in contemporary journalism, online commerce, security and policing, and social media.