While some argue that the rise of software automation threatens workers with obsolescence, others assert that new complementarities between humans and software systems are likely to emerge. This study draws on 19 months of participant-observation research at a software firm to investigate how relations between workers and technology evolved over three phases of the company’s development. The author finds two forms of human–software complementarity: computational labor that supports or stands in for software algorithms and emotional labor aimed at helping users adapt to software systems. Instead of perfecting software algorithms that would progressively push people out of the production process, managers continually reconfigured assemblages of software and human helpers, developing new forms of organization with a dynamic relation to technology. The findings suggest how the dynamism of the organizations in which software algorithms are produced and implemented will contribute to labor’s enduring relevance in the digital age.
This article outlines a research agenda for a sociology of artificial intelligence (AI). The authors review two areas in which sociological theories and methods have made significant contributions to the study of inequalities and AI: (1) the politics of algorithms, data, and code and (2) the social shaping of AI in practice. The authors contrast sociological approaches that emphasize intersectional inequalities and social structure with other disciplines’ approaches to the social dimensions of AI, which often have a thin understanding of the social and emphasize individual-level interventions. This scoping article invites sociologists to use the discipline’s theoretical and methodological tools to analyze when and how inequalities are made more durable by AI systems. Sociologists are well positioned to identify how inequalities are embedded in all aspects of society and to point toward avenues for structural social change. Therefore, sociologists should play a leading role in imagining and shaping AI futures.
How do digital platforms govern their users? Existing studies, with their focus on impersonal and procedural modes of governance, have largely neglected to examine the human labor through which platform companies attempt to elicit the consent of their users. This study describes the relationship labor that is systematically excised from many platforms' accounts of what they do and missing from much of the scholarship on platform governance. Relationship labor is carried out by agents of platform companies who engage in interpersonal communications with a platform's users in an effort to align diverse users' activities and preferences with the company's interests. The authors draw on ethnographic research conducted at AllDone (a for-profit startup that built an online market for local services) and edX (a non-profit startup that partnered with institutions to offer Massive Open Online Courses). The findings leverage variation in organizational contexts to elaborate the common practices and divergent strategies of relationship labor deployed by each platform. Both platforms relied on relationship workers to engage in account management practices aimed at addressing the particular concerns of individual users through interpersonal communications. Relationship workers in each setting also engaged in community management practices that facilitated contact and collaboration among users in pursuit of shared goals. However, the findings show that the relative frequency of relationship workers' use of account management and community management practices varies with organizational conditions. This difference in strategies also corresponds to different ways of valuing relationship workers and incorporating them into organizational processes. The article demonstrates how variation in organizational context accounts for divergent strategies for governing user participation in digital platforms, and for the particular processes through which governance is accomplished and contested.
Some researchers have warned that advances in artificial intelligence will increasingly allow employers to substitute human workers with software and robotic systems, heralding an impending wave of technological unemployment. By attending to the particular contexts in which new technologies are developed and implemented, others have revealed that there is nothing inevitable about the future of work, and that there is instead the potential for a diversity of models for organizing the relationship between work and artificial intelligence. Although these social constructivist approaches allow researchers to identify sources of contingency in technological outcomes, they are less useful in explaining how aims and outcomes can converge across diverse settings. In this essay, I make the case that researchers of work and technology should endeavor to link the outcomes of artificial intelligence systems not only to their immediate environments but also to less visible—but nevertheless deeply influential—structural features of societies. I demonstrate the utility of this approach by elaborating on how finance capital structures technology choices in the workplace. I argue that investigating how the structure of ownership influences a firm’s technology choices can open our eyes to alternative models and politics of technological development, improving our understanding of how to make innovation work for everyone instead of allowing the benefits generated by technological change to be hoarded by a select few.
This article details the principles and practices animating an “ethnographic” method of teaching social theory. As opposed to the traditional “survey” approach that aims to introduce students to the historical breadth of social thought, the primary objective of teaching ethnographically is to cultivate students as participant observers who interpret, adjudicate between, and practice social theories in their everyday lives. Three pedagogical principles are central to this approach, the first laying the groundwork for the two that follow: (1) intensive engagement with manageable portions of text, (2) conversations among theorists, and (3) dialogues between theory and lived experience. Drawing on examples from our experiences as graduate student instructors for a two-semester theory sequence, we offer practical guideposts to sociology instructors interested in integrating “living theory” into their own curricula by clarifying how each principle is put into action in course assignments, classroom discussions and activities, and evaluations of student learning. We conclude by encouraging sociology departments and instructors to consider the potential benefits and drawbacks of offering social theory courses built around in-depth readings of and conversations between social theorists and the social world.