This article examines the phenomenon of Instagram influencer "engagement pods" as an emergent form of resistance that responds to the reconfigured working conditions of platformized cultural production. Engagement pods are grassroots communities whose members agree to mutually like, comment on, share, or otherwise engage with each other's posts, regardless of content, to game Instagram's algorithm into prioritizing participants' content and showing it to a broader audience. I argue that engagement pods are a response to the material conditions of platformized cultural production on Instagram, where proprietary curation algorithms wrest knowledge and control of the labor process from producers. Cooperative algorithm hacking of this sort, although quite distinct from traditional organizing strategies, responds to the coercive force of the "threat of invisibility" that necessitates constant data production. Pods represent a collective attempt by workers to exert some control over their "conditions of presence-to-others" and, in so doing, to combat precarity and protect wages in the field. In a post-industrial economy where traditional models of labor organizing have struggled to address the conditions of platformized cultural work, the unusual phenomenon of Instagram engagement pods constitutes an organic form of worker resistance attuned to the distinctive conditions of these workers.
This article examines how imagined audiences and impression management strategies shape COVID-19 health information sharing practices on social media and considers the implications for combatting the spread of misinformation online. In an interview study with 27 Canadian adults, participants were shown two infographics about masks and vaccines produced by the World Health Organization (WHO) and asked whether they would share these on social media. We find that interviewees' willingness to share the WHO infographics is negotiated against their imagined online audience, which they conceptualize in three distinct ways. First, interviewees who would not share the infographics frequently describe a self-similar audience of peers who are "in the know" about COVID-19; second, those who might share the infographics conjure a specific and contextual audience who "needs" the information; and finally, those who said they would share the infographics most frequently conjure an abstract audience of "the public" or "my community" to explain that decision. Implications of these sharing behaviors for combatting the spread of misinformation are discussed.
In 2014, at the height of Gamergate hostilities, a blockbot was developed and circulated within the gaming community that allowed subscribers to automatically block upwards of 8,000 Twitter accounts. "Ggautoblocker," as it was called, was designed to insulate subscribers' Twitter feeds from hurtful, sexist, and in some cases deeply disturbing comments. In doing so, it cast a wide net and became a source of considerable criticism from many in the industry and games community. During this time, the International Game Developers Association (IGDA) 2015 Video Game Developer Satisfaction Survey was circulating, yielding a host of comments on the blockbot from workers in the industry. In this paper we analyze these responses, which constitute some of the first empirical data on public reactions to the use of autoblocking technology, to consider the broader implications of the algorithmic structuring of the online public sphere. First, we emphasize the important role that ggautoblocker, and similar autoblocking tools, play in creating space for marginalized voices online. Then, we turn to our findings and argue that the overwhelmingly negative response to ggautoblocker reflects underlying anxieties about fragmenting control over the structure of the online public sphere and online public life. In our discussion, we reflect on what the negative responses suggest about normative expectations of participation in the online public sphere, and how these expectations contrast with the realities of algorithmically structured online spaces.