2016
DOI: 10.1613/jair.4940
Adaptive Contract Design for Crowdsourcing Markets: Bandit Algorithms for Repeated Principal-Agent Problems

Abstract: Crowdsourcing markets have emerged as a popular platform for matching available workers with tasks to complete. The payment for a particular task is typically set by the task's requester, and may be adjusted based on the quality of the completed work, for example, through the use of "bonus" payments. In this paper, we study the requester's problem of dynamically adjusting quality-contingent payments for tasks. We consider a multi-round version of the well-known principal-agent model, whereby in each round a wo…
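To make the repeated interaction described in the abstract concrete, below is a minimal simulation sketch of one plausible reading of the setup: in each round the requester posts a quality-contingent contract (a base payment plus a bonus paid when the work meets a quality bar), a worker chooses an effort level that maximizes their own utility, and the requester observes the outcome. All names and functions here (`Contract`, `worker_best_response`, the effort, cost, and quality maps) are illustrative assumptions, not the paper's notation.

```python
import random
from dataclasses import dataclass

# Hypothetical illustration of one repeated principal-agent round; the effort,
# cost, and quality functions below are assumptions, not the paper's model.

@dataclass
class Contract:
    base: float   # unconditional payment
    bonus: float  # extra payment if the work is judged "high quality"

EFFORTS = [0.0, 0.5, 1.0]               # worker's possible effort levels
COST = {0.0: 0.0, 0.5: 0.2, 1.0: 0.5}   # assumed cost of each effort level

def quality_prob(effort: float) -> float:
    """Assumed probability that work at this effort level earns the bonus."""
    return 0.1 + 0.8 * effort

def worker_best_response(c: Contract) -> float:
    """Worker picks the effort maximizing expected payment minus cost."""
    return max(EFFORTS, key=lambda e: c.base + quality_prob(e) * c.bonus - COST[e])

def play_round(c: Contract, value_of_quality: float = 1.0) -> float:
    """One round: worker responds to the contract, requester observes utility."""
    effort = worker_best_response(c)
    high_quality = random.random() < quality_prob(effort)
    payment = c.base + (c.bonus if high_quality else 0.0)
    return (value_of_quality if high_quality else 0.0) - payment

# Example: the requester's average utility under one particular bonus contract.
utilities = [play_round(Contract(base=0.05, bonus=0.4)) for _ in range(10_000)]
print(sum(utilities) / len(utilities))
```

The requester's learning problem studied in the paper is then to choose which contract to post in each round, without knowing the worker's cost or response function in advance.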

Cited by 68 publications (67 citation statements)
References 49 publications
“…In Section 7, we propose a variant of this worker model that is in line with our experimental observations. The results of Ho et al [16] still apply when our worker model is used in place of theirs.…”
Section: Introduction (mentioning)
confidence: 70%
“…As an example, consider Ho et al [16], a recent theoretical paper on the optimization of PBPs. While that paper posits the standard principal-agent model, all results carry over to our model.…”
Section: Comparison With Principal-agent Model (mentioning)
confidence: 99%
“…The current literature on active crowd-labeling is mainly focused on binary annotation problems (Sheng, Provost, & Ipeirotis, 2008; Donmez & Carbonell, 2008a, 2008b; Donmez, Carbonell, & Schneider, 2009; Hsueh, Melville, & Sindhwani, 2009; Welinder & Perona, 2010; Yan, Rosales, Fung, & Dy, 2011; Gao, Liu, Ooi, Wang, & Chen, 2013; Lin, Mausam, & Weld, 2016; Tran-Thanh, Venanzi, Rogers, & Jennings, 2013; Tran-Thanh, Huynh, Rosenfeld, Ramchurn, & Jennings, 2014; Fang, Yin, & Tao, 2014; Raykar & Agrawal, 2014; Mozafari, Sarkar, Franklin, Jordan, & Madden, 2014; Nguyen, Wallace, & Lease, 2015; Zhang, Wen, Tian, Gan, & Wang, 2015; Zhuang & Young, 2015; Zhu, Xu, & Yan, 2015; Ho, Jabbari, & Vaughan, 2013; Ho, Slivkins, & Vaughan, 2016; Khetan & Oh, 2016). We briefly survey the main tenets below.…”
Section: Active Crowd-labeling For Binary Annotation Problems (mentioning)
confidence: 99%
“…Their method requires the use of gold standard labels for assessing annotator quality and uses weighted majority voting for inferring the consensus. Ho et al. (2016) treat the payment problem for crowdsourcing markets as a multi-armed bandit problem, where each arm represents the contract between a task and an annotator. They propose a method called 'Agnostic Zooming' for selecting the most beneficial contract and study dynamic task pricing.…”
Section: Active Crowd-labeling For Binary Annotation Problems (mentioning)
confidence: 99%
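The bandit framing described in this citation statement can be illustrated with a plain UCB1 loop over a fixed grid of candidate bonus levels. This is only a simplified stand-in for intuition: the paper's own method, Agnostic Zooming, adaptively refines the contract space rather than committing to a fixed discretization, and the `pull` reward simulator below is an assumption made for the demo, not the paper's model.

```python
import math
import random

def pull(bonus: float) -> float:
    """Simulated requester utility for posting a given bonus level.

    Illustrative assumption only: the learner does not know this mapping,
    which is exactly why contract selection is treated as a bandit problem.
    """
    effort = min(1.0, 2.0 * bonus)                     # assumed worker response
    high_quality = random.random() < 0.1 + 0.8 * effort
    return (1.0 - bonus) if high_quality else 0.0      # value minus bonus paid

arms = [i / 10 for i in range(11)]   # candidate bonus levels 0.0, 0.1, ..., 1.0
counts = [0] * len(arms)
totals = [0.0] * len(arms)

for t in range(1, 5001):
    # UCB1: try each arm once, then pick the arm with the best optimistic index.
    if t <= len(arms):
        a = t - 1
    else:
        a = max(range(len(arms)),
                key=lambda i: totals[i] / counts[i]
                + math.sqrt(2 * math.log(t) / counts[i]))
    reward = pull(arms[a])
    counts[a] += 1
    totals[a] += reward

best = max(range(len(arms)), key=lambda i: totals[i] / counts[i])
print(f"empirically best bonus level: {arms[best]:.1f}")
```

Each "arm" here corresponds to one candidate contract, and the loop balances exploring untried bonus levels against exploiting the one with the best observed average utility, which is the core trade-off the cited paper formalizes.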