In 2002, multimedia artist Hasan Elahi launched Tracking Transience, a website designed to make public his location and activity. Yet despite this complete disclosure of personal information, Tracking Transience enacts a resistive posture toward recent developments in digital surveillance. Rather than uphold claims to privacy, Elahi negotiates his security through the release of his personal information; paradoxically, he voluntarily forgoes privacy in order to feel more secure. His resistance thus registers self-surveillance as a performance of transparency. He inverts the normative flow of power in digital surveillance into a new critical posture, one in which the artist becomes anonymous to surveillance systems and maintains a comfortable level of security without the typical recourse to privacy. It is precisely this mode of anonymity, Elahi argues, that produces resistance to surveillance: the artist participates in digital surveillance in order to avoid it. By tracing the methodologies that generate data on Elahi’s activity, this paper speculates on how creative interventions can effect resistive strategies against surveillance systems by moving beyond the historical limits of privacy into the outer reaches of anonymity in our contemporary age of transparency.
This article considers the medial logics of American terrorist watchlist screening in order to study the ways in which digital inequities result from specific computational parameters. Central to its analysis is Secure Flight, an automated prescreening program run by the Transportation Security Administration (TSA) that identifies low- and high-risk airline passengers through name-matching algorithms. Considering Secure Flight through the framework of biopolitics, this article examines how passenger information is aggregated, assessed and scored in order to construct racialised assemblages of passengers that reify discourses of American exceptionalism. Racialisation here is neither a consequence of big data nor a motivating force behind the production of risk-assessment programs. Both positions would maintain that discrimination is simply an effect of an information management system that takes privacy as its ultimate goal, an effect easily mitigated with more accurate algorithms. Rather than simply emerging as an effect of discriminatory practices at airport security, racialisation formats the specific techniques embedded in terrorist watchlist matching, in particular the strategies used to transliterate names across different script systems. I thus argue that the biopolitical production of racialised assemblages forms the ground zero of Secure Flight’s computational parameters, as well as of its claims to accuracy. This article concludes by proposing a move away from the call to solve digital inequities with more precise algorithms and toward a careful interrogation of the forms of power complicit in the production and use of big data analytics.