In recent years, a discourse of ‘ethical artificial intelligence’ has emerged and gained international traction in response to widely publicised AI failures. In Australia, the discourse around ethical AI does not accord with the reality of AI deployment in the public sector. Drawing on institutional ethnographic approaches, this paper describes the misalignments between how technology is described in government documentation and how it is deployed in social service delivery. We argue that the propagation of ethical principles legitimates established new public management strategies and pre-empts questions about the efficacy of AI development, instead positioning implementation as inevitable and, provided an ethical framework is adopted, laudable. The ethical AI discourse acknowledges, and ostensibly seeks to move past, widely reported administrative failures involving new technologies. In actuality, this discourse works to make AI implementation a reality, ethical or not.
Since 2016, welfare recipients in Australia have been subject to the Online Compliance Intervention (OCI), implemented through the national income support agency, Centrelink. This big data initiative matches reported income against tax records to recoup welfare overpayments. The OCI proved controversial, notably for its “reverse onus,” which requires claimants to disprove debts, and for a data-matching design that frequently produced incorrect debts. As algorithmic governance, the OCI directs attention to the chronopolitics of contemporary welfare bureaucracies. It outsources labor previously performed by Centrelink to clients, compelling them to submit documentation lest debts be raised against them. It imposes an active, deadline-bound wait on those issued debt notifications. Belying government rhetoric about the accessibility of the digital state, the OCI demonstrates how automation exacerbates punitive welfare agendas through transfers of time, money, and labor whose combined effect is to occupy the time of people experiencing poverty.
This article develops and troubles existing approaches to visual self-representation in social media, questioning the naturalized roles of faces and bodies in mediated self-representation. We argue that self-representation in digital communication should not be treated as synonymous with selfies, and that selfies themselves should not be reductively equated with performances of embodiment. We do this through a discussion of "not-selfies": visual self-representations consisting of images that do not feature the likenesses of the people who share them but instead show objects, animals, fictional characters, or other things, as in the practices of #EDC ("everyday carry") and #GPOY ("gratuitous picture of yourself") on platforms such as Tumblr, Facebook, Instagram, and reddit. We present an account of self-representation as an emergent, recognizable, intertextual genre, and show that #EDC and #GPOY practices are best conceptualized as instances of self-representation.