Proceedings of the 12th ACM Conference on Electronic Commerce 2011
DOI: 10.1145/1993574.1993599
Who moderates the moderators?

Abstract: A large fraction of user-generated content on the Web, such as posts or comments on popular online forums, consists of abuse or spam. Due to the volume of contributions on popular sites, a few trusted moderators cannot identify all such abusive content, so viewer ratings of contributions must be used for moderation. But not all viewers who rate content are trustworthy and accurate. What is a principled approach to assigning trust and aggregating user ratings in order to accurately identify abusive content? In …
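The abstract's central question, how to assign trust to raters and aggregate their ratings, can be illustrated with a minimal sketch of iterative trust-weighted voting. This is a generic illustration only, not the algorithm proposed in the paper; the function name `moderate`, the vote encoding (+1 acceptable, -1 abusive), and the trust-update rule are all assumptions made here for the example.

```python
# Minimal sketch of iterative trust-weighted voting (illustrative only, not
# the paper's algorithm). Each rater starts with uniform trust; consensus
# labels and trust scores are then refined alternately.
def moderate(ratings, n_iter=10):
    """ratings: dict mapping item -> {rater: vote}, vote in {+1, -1}."""
    raters = {r for votes in ratings.values() for r in votes}
    trust = {r: 1.0 for r in raters}  # start with uniform trust
    labels = {}
    for _ in range(n_iter):
        # Aggregation step: trust-weighted majority vote per item.
        labels = {
            item: 1 if sum(trust[r] * v for r, v in votes.items()) >= 0 else -1
            for item, votes in ratings.items()
        }
        # Trust step: fraction of a rater's votes agreeing with consensus.
        for r in raters:
            agree = [
                1.0 if v == labels[item] else 0.0
                for item, votes in ratings.items()
                for rr, v in votes.items() if rr == r
            ]
            trust[r] = sum(agree) / len(agree) if agree else 0.5
    return labels, trust
```

With two honest raters and one adversarial rater who always votes against the truth, the adversary's trust collapses toward zero and the consensus labels match the honest majority, which is the qualitative behavior the abstract asks a principled scheme to deliver.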

Cited by 98 publications (18 citation statements) | References 14 publications
“…Nevertheless moderator's intervention was sometimes needed when off-topic discussions on individual technical issues were causing too much distraction from the main task. Future iterations of the system may include different crowd roles, including moderators, and/or features for self-moderation [14].…”
Section: Discussion
confidence: 99%
“…Our approach also has connections to the crowdsourcing literature [17; 11], and in particular to spectral and method of moments-based approaches [38; 9; 12; 1]. In contrast, the goal of our work is to support and explore settings not covered by crowdsourcing work, such as sources with correlated outputs, the proposed multi-task supervision setting, and regimes wherein a small number of labelers (weak supervision sources) each label a large number of items (data points).…”
Section: Related Work
confidence: 99%
“…Geiger studied one collective approach to mutual moderation via Twitter blocklists, where users work together to coordinate lists of users who they all agree to block [33], a phenomenon further explored in research by Jhaver et al [51], though in a sense this is more a form of community self-moderation than platform-driven moderation. Other work has proposed methods for making user reports more effective; Ghosh, Kale, and McAfee [38] proposed a computational approach for identifying trustworthy volunteer content raters and screening out bad actors, an approach that translates well to, e.g., identifying trustworthy reports on Reddit.…”
Section: Sociotechnical
confidence: 99%