2016
DOI: 10.1613/jair.5175

Time-Sensitive Bayesian Information Aggregation for Crowdsourcing Systems

Abstract: Many aspects of the design of efficient crowdsourcing processes, such as defining workers' bonuses, fair prices, and time limits for tasks, involve knowledge of the likely duration of the task at hand. In this work we introduce a new time-sensitive Bayesian aggregation method that simultaneously estimates a task's duration and obtains reliable aggregations of crowdsourced judgments. Our method, called BCCTime, uses latent variables to represent the uncertainty about the workers' completion time, the tasks' d…
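The abstract's core idea, coupling judgment aggregation with a model of completion times, can be illustrated with a small sketch. This is not the paper's BCCTime model: the EM structure, the single global log-normal duration fit, the soft weighting scheme, and every name below are assumptions made for illustration only; BCCTime itself is a Bayesian latent-variable model over workers' completion times and task durations, as the abstract states.

```python
# Minimal illustrative sketch, NOT the paper's BCCTime model: a Dawid-Skene-style
# EM aggregation in which each judgment is down-weighted when its completion time
# is implausible. The log-normal duration assumption, the soft-weighting scheme,
# and all names here are assumptions made for this sketch only.
import numpy as np

def aggregate_with_durations(labels, durations, n_classes, n_iter=50):
    """labels[i, j]: label given by worker j on task i (-1 = not judged).
    durations[i, j]: seconds worker j spent on task i (ignored where label is -1)."""
    n_tasks, n_workers = labels.shape
    observed = labels >= 0

    # Initialise true-label posteriors from vote counts (plus a tiny smoothing term).
    post = np.full((n_tasks, n_classes), 1e-6)
    for i, j in zip(*np.nonzero(observed)):
        post[i, labels[i, j]] += 1.0
    post /= post.sum(axis=1, keepdims=True)

    # Weight each judgment by how plausible its duration is under a single global
    # log-normal fit (a simplification; the paper instead places latent variables
    # on workers' completion times and task durations, per the abstract).
    log_d = np.log(np.clip(durations, 1e-3, None))
    mu, sigma = log_d[observed].mean(), log_d[observed].std() + 1e-6
    z = np.abs((log_d - mu) / sigma)
    weight = np.exp(-0.5 * np.clip(z - 1.0, 0.0, None) ** 2)  # ~1.0 near typical durations

    for _ in range(n_iter):
        # M-step: per-worker confusion matrices with a flat pseudo-count prior.
        conf = np.ones((n_workers, n_classes, n_classes))
        for i, j in zip(*np.nonzero(observed)):
            conf[j, :, labels[i, j]] += weight[i, j] * post[i]
        conf /= conf.sum(axis=2, keepdims=True)

        # E-step: recompute true-label posteriors from the weighted likelihoods.
        post = np.ones((n_tasks, n_classes))
        for i, j in zip(*np.nonzero(observed)):
            post[i] *= conf[j, :, labels[i, j]] ** weight[i, j]
        post /= post.sum(axis=1, keepdims=True)

    return post  # post[i, c]: estimated probability that task i's true label is c
```

In this sketch a judgment whose duration is far from typical contributes less to both the worker's confusion matrix and the task's label posterior, which is one simple way to operationalise the intuition that implausibly fast or slow work is less reliable.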

Cited by 23 publications (16 citation statements)
References 22 publications
“…We expand this previous work by detailing the relationships between several annotator models and extending them to sequential classification. Here we focus on the core annotator representation, rather than extensions for clustering annotators (Venanzi et al, 2014; Moreno et al, 2015), modeling their dynamics (Simpson et al, 2013), adapting to task difficulty (Whitehill et al, 2009; Bachrach et al, 2012), or time spent by annotators (Venanzi et al, 2016).…”
Section: Introduction (mentioning)
confidence: 99%
“…Cloud resource, though developed to possess unique competencies, is neither rare nor inimitable (Mitra et al, 2017). The Internet of Things and crowdsourcing, on the other hand, are highly reliant on external sources of information (Santaro et al, 2017; Venanzi et al, 2016). Meanwhile, big data will also concede on this dependence, acknowledging that its roles will continue to morph as a long-term investment (Sedera et al, 2016).…”
Section: Results (mentioning)
confidence: 99%
“…A relatively smaller portion of the existing work in the active crowd-labeling literature concentrates on categorical annotations (Welinder & Perona, 2010; Yan, Rosales, Fung, & Dy, 2011; Mozafari, Sarkar, Franklin, Jordan, & Madden, 2014; Zhu, Xu, & Yan, 2015; Kamar, Hacker, & Horvitz, 2012; Kamar, Kapoor, & Horvitz, 2013; Venanzi, Guiver, Kohli, & Jennings, 2016). These methods may also be adapted for binary annotations by considering only two categories.…”
Section: Active Crowd-labeling for Categorical Annotation Problems (mentioning)
confidence: 99%
“…Kamar et al. (2015) focus on the problem of rectifying task-related bias of annotators and show that active learning with expert annotators can be used for alleviating bias. Venanzi et al. (2016) use a time-sensitive Bayesian aggregation method to estimate the labeling duration and annotator profile in crowdsourcing systems. They detect bots, spammers, or lazy annotators from the duration of their labeling process (either too short or too long).…”
Section: Active Crowd-labeling for Categorical Annotation Problems (mentioning)
confidence: 99%
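The duration-based screening described in the last statement (flagging bots, spammers, or lazy annotators whose labeling times are too short or too long) can be sketched very simply; the quantile thresholds, the flagging rule, and the function name below are hypothetical choices for illustration, not the detector used by Venanzi et al. (2016).

```python
# Toy duration-based screening, an illustration only (not the paper's detector):
# flag workers whose labeling times mostly fall outside a plausible range.
import numpy as np

def flag_suspicious_workers(durations_by_worker, low_q=0.05, high_q=0.95, min_frac=0.5):
    """durations_by_worker: dict mapping worker id -> list of per-task durations (seconds)."""
    all_durations = np.concatenate([np.asarray(d, float) for d in durations_by_worker.values()])
    lo, hi = np.quantile(all_durations, [low_q, high_q])
    flagged = []
    for worker, d in durations_by_worker.items():
        d = np.asarray(d, float)
        outside = np.mean((d < lo) | (d > hi))  # fraction of implausibly fast/slow judgments
        if outside >= min_frac:
            flagged.append(worker)
    return flagged
```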