2014
DOI: 10.1177/1461444814543163
What is a flag for? Social media reporting tools and the vocabulary of complaint

Abstract: The flag is now a common mechanism for reporting offensive content to an online platform, and is used widely across most popular social media sites. It serves both as a solution to the problem of curating massive collections of user-generated content and as a rhetorical justification for platform owners when they decide to remove content. Flags are becoming a ubiquitous mechanism of governance—yet their meaning is anything but straightforward. In practice, the interactions between users, flags, algorithms, con…
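The abstract frames the flag as a user-facing reporting primitive whose outcome rests entirely with the platform. As a purely illustrative sketch (the article specifies no implementation, and every name below is hypothetical), a flag can be modeled as a record that carries the reporter's complaint while reserving the removal decision for a separate, platform-controlled moderation step:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Decision(Enum):
    PENDING = "pending"      # flag received, no ruling yet
    REMOVED = "removed"      # platform takes the content down
    RETAINED = "retained"    # platform leaves the content up


@dataclass
class Flag:
    """One user's complaint about one piece of content."""
    content_id: str
    reporter_id: str
    reason: str              # a category from the site's complaint vocabulary
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    decision: Decision = Decision.PENDING  # set by the platform, never the reporter


def resolve(flag: Flag, remove: bool) -> Flag:
    """A moderator (human or automated) rules on a pending flag."""
    flag.decision = Decision.REMOVED if remove else Decision.RETAINED
    return flag
```

Even in this toy model, the asymmetry the abstract points to is visible: the reporter supplies only a reason, while the decision field is written exclusively by the platform's resolve step.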

Cited by 314 publications (220 citation statements); references 19 publications.
“…Not only did participants identify distinctly different management strategies as suitable for different platforms, but also for different populations, communities, and types of trolls. Specifically, the many levels of 'platforms' [47,48], along with other nuances of individual interactions, generate similarities and differences between acts of trolling. Beyond the technical and institutional similarities at the higher levels of a platform, which enable trolling in similar ways, cases are differentiated by the visibility of individual incidents, as in a mass-trolling event, versus the specificity of small subcommunities.…”
Section: Discussion (mentioning)
confidence: 99%
“…52; 53]. Additional creative recommendations were not mentioned by participants, including: creative censorship techniques, such as hell-banning or shadowbanning; employing humor to disarm them; developing barriers to social participation; tracking trolls; flagging systems; and automated interventions [5,47,53,54]. It is important to further evaluate these strategies, given that they conform to participants' expressed standards for management in particular cases, yet may be inappropriate in others.…”
Section: Discussion (mentioning)
confidence: 99%
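Of the strategies listed in this excerpt, shadowbanning (or hell-banning) has a simple mechanical core: the sanctioned user's posts stay visible to that user but are filtered out of everyone else's feeds, so the ban is not apparent to its target. The following is a minimal, hypothetical sketch, not drawn from any of the cited papers; all names are invented:

```python
# Hypothetical shadowban filter: posts by shadowbanned authors are shown
# only to the authors themselves, keeping the ban invisible to them.

shadowbanned: set[str] = {"troll_42"}  # user IDs under a shadowban

posts = [
    {"author": "alice", "text": "hello"},
    {"author": "troll_42", "text": "bait"},
]


def visible_posts(viewer_id: str) -> list[dict]:
    """Return the feed as seen by viewer_id."""
    return [
        p for p in posts
        if p["author"] not in shadowbanned or p["author"] == viewer_id
    ]


# Other users never see the shadowbanned account's posts...
assert {p["author"] for p in visible_posts("alice")} == {"alice"}
# ...but the shadowbanned user still sees their own.
assert {p["author"] for p in visible_posts("troll_42")} == {"alice", "troll_42"}
```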
“…Once we reached reasonable reliability between coders on our modified scale, we proceeded to have human coders, recruited and trained through MTurk, rate 900 tweets (see [14]) from our #GamerGate dataset. Each tweet was coded only once.…”
Section: Human Coding (mentioning)
confidence: 99%
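The reliability step this excerpt describes is conventionally quantified with an inter-coder agreement statistic. The excerpt does not name which one was used, so as an illustration this sketch assumes Cohen's kappa for two coders:

```python
from collections import Counter


def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Cohen's kappa for two coders labeling the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labeled identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    # Chance agreement: probability both coders pick the same label independently.
    expected = sum(
        (freq_a[label] / n) * (freq_b[label] / n)
        for label in set(coder_a) | set(coder_b)
    )
    return (observed - expected) / (1 - expected)


# Toy example: two coders rating five tweets as harassing ("h") or not ("n").
print(cohens_kappa(["h", "n", "h", "n", "n"], ["h", "n", "n", "n", "n"]))
```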
“…That harassing and abusive messages sent over the internet can reach so many people in such a short time makes managing such harassment an onerous task [11]. Manual reporting and commercial content moderation (CCM) [see 34] are currently the most common approaches to combatting harassment [14]. In this model, it is either incumbent upon the victim or a third-party observer of harassment to report and/or manage the harassing content.…”
Section: Introduction (mentioning)
confidence: 99%
“…Twitter's rules of use prohibit automatically posting tweets with a hashtag. But this practice, like any other that breaks the rules, must be reported by users for Twitter to act on it (Crawford and Gillespie, 2016).…”
Section: Twittersphere: A Polarized and Automated Space of Contestation (unclassified)