2014
DOI: 10.1016/j.artint.2014.04.005

Efficient crowdsourcing of unknown experts using bounded multi-armed bandits

Abstract: Increasingly, organisations flexibly outsource work on a temporary basis to a global audience of workers. This so-called crowdsourcing has been applied successfully to a range of tasks, from translating text and annotating images, to collecting information during crisis situations and hiring skilled workers to build complex software. While traditionally these tasks have been small and could be completed by non-professionals, organisations are now starting to crowdsource larger, more complex tasks to experts in…

Cited by 129 publications (131 citation statements)
References 27 publications
“…Another approach determines the price automatically or dynamically by designing efficient mechanisms. Typical examples include a multi-armed-bandit-based pricing mechanism [24] and auction-based pricing mechanisms [25]. However, these approaches do not address the challenges of free-riding by workers or denial of payment by requesters.…”
Section: Related Work
confidence: 99%
“…The goal is to identify the slot machine with the best payout rate while simultaneously maximising winnings. This means considering, on the one hand, machines that have yielded high winnings in the past and, on the other, new or seemingly worse machines that might yet yield even better winnings [13]. In principle, MABs thus model agents that simultaneously try to acquire new knowledge ("exploration") and to optimise their decisions on the basis of the knowledge they already have ("exploitation") [9].…”
Section: Interactive Machine Learning (IML)
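The exploration–exploitation trade-off described in that citation statement can be illustrated with a minimal ε-greedy bandit simulation. This is a generic sketch, not an algorithm from the cited works; the arm payout rates, the value of ε, and the number of rounds are all illustrative assumptions:

```python
import random

def epsilon_greedy(true_means, epsilon=0.1, rounds=1000, seed=0):
    """Simulate an epsilon-greedy multi-armed bandit.

    With probability epsilon, explore a uniformly random arm;
    otherwise exploit the arm with the highest empirical mean reward.
    true_means are illustrative Bernoulli payout probabilities.
    """
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k        # pulls per arm
    sums = [0.0] * k        # total reward per arm
    total = 0.0
    for _ in range(rounds):
        if rng.random() < epsilon or 0 in counts:
            arm = rng.randrange(k)  # explore (or warm-start unpulled arms)
        else:
            # exploit: highest empirical mean so far
            arm = max(range(k), key=lambda a: sums[a] / counts[a])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    best = max(range(k), key=lambda a: counts[a])
    return best, total

best_arm, reward = epsilon_greedy([0.2, 0.5, 0.8])
```

With a small ε the agent spends most rounds on whichever arm currently looks best, while still occasionally sampling the others in case its estimates are wrong, which is exactly the tension the quoted passage describes.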
“…Tran-Thanh et al. [258] proposed a bounded multi-armed bandit model for expert crowdsourcing. Specifically, the proposed ε-first algorithm works in two stages: first, it uses part of the total budget to explore and estimate the workers' quality; second, it uses the remaining budget to exploit those quality estimates and maximise the overall utility.…”
Section: Planning and Scheduling
confidence: 99%
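The two-stage ε-first structure summarised in that citation statement can be sketched as follows. This is a simplified illustration of the idea, not the paper's exact algorithm; the worker qualities, per-task costs, budget, and the quality-per-cost (density) exploitation rule are illustrative assumptions:

```python
import random

def epsilon_first(qualities, costs, budget=200.0, eps=0.25, seed=1):
    """Two-stage epsilon-first bandit under a hard budget.

    Stage 1 (exploration): spend eps * budget hiring workers in
    round-robin fashion to estimate each worker's quality.
    Stage 2 (exploitation): spend the remaining budget greedily on
    the affordable worker with the best estimated quality per cost.
    Rewards are Bernoulli(quality); all parameters are illustrative.
    """
    rng = random.Random(seed)
    k = len(qualities)
    counts = [0] * k
    sums = [0.0] * k
    utility = 0.0

    def hire(i):
        nonlocal utility
        reward = 1.0 if rng.random() < qualities[i] else 0.0
        counts[i] += 1
        sums[i] += reward
        utility += reward
        return costs[i]

    # Stage 1: round-robin exploration with eps * budget.
    explore_budget = eps * budget
    i = 0
    while explore_budget >= costs[i % k]:
        explore_budget -= hire(i % k)
        i += 1
    remaining = budget - (eps * budget - explore_budget)

    # Stage 2: greedy exploitation by estimated quality per unit cost.
    def density(a):
        return (sums[a] / counts[a]) / costs[a] if counts[a] else 0.0

    while any(c <= remaining for c in costs):
        affordable = [a for a in range(k) if costs[a] <= remaining]
        pick = max(affordable, key=density)
        remaining -= hire(pick)
    return utility

total_utility = epsilon_first([0.3, 0.5, 0.9], [1.0, 2.0, 4.0])
```

The split parameter ε controls the trade-off: too little exploration risks exploiting a mis-estimated worker for the whole remaining budget, while too much exploration wastes budget on workers already known to be poor.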