Australia is firmly part of the Global North, yet it is geographically located in the Global South. This North-in-South divide also plays out within Australia itself, given its status as a British settler-colonial society that continues to perpetrate imperial and colonial practices vis-à-vis Indigenous peoples and its neighboring countries in the Asia-Pacific region. This article discusses five seminal examples, which together form an Australian case study, to examine big data practices through the lens of Southern Theory from a criminological perspective. We argue that Australia’s use of big data cements its status as a North-in-South environment in which colonial domination is continued via modern technologies to effect enduring informational imperialism and digital colonialism. We conclude by outlining some promising ways in which data practices can be decolonized through Indigenous Data Sovereignty, but acknowledge that these are not currently the norm, so Australia’s digital colonialism/coloniality endures for the time being.
The potential for biases to be built into algorithms has been known for some time (e.g., Friedman and Nissenbaum, 1996), yet the literature has only recently demonstrated the ways algorithmic profiling can result in social sorting and harm marginalised groups (e.g., Browne, 2015; Eubanks, 2018; Noble, 2018). We contend that as algorithmic complexity increases, biases will become more sophisticated and more difficult to identify, control for, or contest. Our argument proceeds in four steps: first, we show how harnessing algorithms means that data gathered at a particular place and time, relating to specific persons, can be used to build group models that are then applied in different contexts to different persons. Privacy and data protection rights, with their focus on individuals (Coll, 2014; Parsons, 2015), therefore do not protect against the discriminatory potential of algorithmic profiling. Second, we explore the idea that anti-discrimination regulation may be more promising, but acknowledge its limitations. Third, we argue that if anti-discrimination regulation is to be harnessed, it must confront emergent forms of discrimination or risk creating new invisibilities, including invisibility from existing safeguards. Finally, we outline suggestions for addressing emergent forms of discrimination and exclusionary invisibilities via intersectional and post-colonial analysis.
This article explores technological sovereignty as a way to respond to anxieties of control in digital urban contexts, and argues that it may underpin a more meaningful social license to operate smart cities. First, we present an overview of smart city developments with a critical focus on corporatization and platform urbanism. We critique Alphabet's Sidewalk Labs development in Toronto, which faces public backlash from the #BlockSidewalk campaign in response to concerns over not just privacy, but also the lack of community consultation, the prospect of the city losing its civic ability to self-govern, and the repossession of public land and infrastructure. Second, we explore what a more responsible smart city could look like when underpinned by technological sovereignty: the use of technologies to promote individual and collective autonomy and empowerment via ownership, control, and self-governance of data and technologies. To this end, we juxtapose the Sidewalk Labs development in Toronto with the Barcelona Digital City plan, and illustrate the merits (and limits) of technological sovereignty in moving toward a fairer and more equitable digital society.