Recent interest in ethical AI has brought a slew of values, including fairness, into conversations about technology design. Research on algorithmic fairness tends to be rooted in questions of distribution that lend themselves to precise formalism and technical implementation. We seek to expand this conversation to include the experiences of people subject to algorithmic classification and decision-making. By examining tweets about the "Twitter algorithm," we consider the wide range of concerns and desires Twitter users express. We find that a concern with fairness (narrowly construed) is present, particularly in users' complaints that the platform enacts a political bias against conservatives. However, we find another important category of concern, evident in attempts to exert control over the algorithm. Twitter users who seek control do so for a variety of reasons, many well justified. We argue for clearer definitions of what constitutes legitimate and illegitimate control over algorithmic processes, and for supporting users who wish to enact their own collective choices.

CCS Concepts: • Human-centered computing → HCI theory, concepts and models; Empirical studies in collaborative and social computing.
In this moment of rising nationalism worldwide, governments, civil society groups, transnational companies, and web users all complain of increasing regional fragmentation online. While prior work in this area has primarily focused on issues of government censorship and regulatory compliance, we use an inductive and qualitative approach to examine corporate entities' targeted blocking of entire regions, motivated by concerns about fraud, abuse, and theft. Through participant observation at relevant events and intensive interviews with experts, we document the quest by professionals tasked with preserving online security to use new machine-learning-based techniques to develop a "fairer" system for determining patterns of "good" and "bad" usage. However, we argue that without understanding the systematic social and political conditions that produce differential behaviors online, these systems may continue to embed unequal treatment, and, more troublingly, may further disguise such discrimination behind more complex and less transparent automated assessment. To support this claim, we analyze how current forms of regional blocking incentivize users in blocked regions to behave in ways that are commonly flagged as problematic by dominant security and identification systems. Realizing truly global, non-Eurocentric cybersecurity techniques would mean incorporating the ecosystems of service utilization developed by marginalized users, rather than reasserting the norms of an imagined (Western) user that cast aberrations as suspect.