This paper uses frame analysis to examine recent high-profile values statements endorsing ethical design for artificial intelligence and machine learning (AI/ML). Guided by insights from values in design and the sociology of business ethics, we uncover the grounding assumptions and terms of debate that make some conversations about ethical design possible while forestalling alternative visions. Vision statements for ethical AI/ML co-opt the language of some critics, folding them into a limited, technologically deterministic, expert-driven view of what ethical AI/ML means and how it might work.
The dominance of online social networking sites (SNSs) sparks questions and concerns regarding information privacy, online identity, and the complexities of social life online. Since messages created by a technology’s purveyors can play an influential role in our understanding of that technology, we argue that gaining a complete understanding of the role of social media in contemporary life must include qualitative exploration of how public figures discuss and frame these platforms. Accordingly, this article reports the results of a discourse analysis of Facebook founder and CEO Mark Zuckerberg’s public language, foregrounding the evolution of his discourse surrounding Facebook’s self-definitions, the construction of user identity, and the relationship between Facebook and its users.
Despite the participatory and democratic promises of Web 2.0, many marginalized individuals with fluid or non-normative identities continue to struggle to represent themselves online. Facebook users, in particular, are told to use “authentic identities,” an idea reinforced throughout the site’s documentation, its “real name” and other policies, and in public statements by company representatives. Facebook’s conception of authenticity and real names, however, has created problems for certain users, as demonstrated by the systematic deactivation of many accounts belonging to transgender and gender variant users, drag queens, Native Americans, abuse survivors, and others. In view of the struggles of marginalized users, Facebook policy appears paradoxical: the site simultaneously demands authenticity yet precludes certain people from authentic self-presentation. In this work, we examine Facebook’s construction of “authenticity” and show how it excludes multifaceted, fluid, or non-normative identities. Using content analysis and close reading, we analyze site documentation and data from The Zuckerberg Files (an online archive of Facebook founder and CEO Mark Zuckerberg’s public remarks) to understand the platform’s mechanisms for enforcing authenticity. We find that Facebook positions itself as a type of administrative identity registrar, raising vital questions regarding the ethics and consequences of identity enforcement online today.
Inclusion has emerged as an early cornerstone value for the emerging domain of “data ethics.” On the surface, appeals to inclusion appear to address the threat of biased data technologies making decisions or misrepresenting people in ways that reproduce long-standing patterns of oppression and violence. Far from a panacea for the threats of pervasive data collection and surveillance, however, these emerging discourses of inclusion merit critical consideration. Here, I use the lens of discursive violence to better theorize the relationship between inclusion and the violent potentials of data science and technology. In doing so, I aim to articulate the problematic and often perverse power relationships implicit in ideals of “inclusion” broadly, which, if not accompanied by dramatic upheavals in existing hierarchical power structures, too often work to diffuse the radical potential of difference and normalize otherwise oppressive structural conditions.