Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval 2017
DOI: 10.1145/3077136.3080666

Autonomous Crowdsourcing through Human-Machine Collaborative Learning

Abstract: In this paper, we introduce a general iterative human-machine collaborative method for training crowdsource workers: a classifier (i.e., the machine) selects the highest quality examples for training crowdsource workers (i.e., the humans). Then, the latter annotate the lower quality examples such that the classifier can be re-trained with more accurate examples. This process can be iterated several times. We tested our approach on two different tasks, Relation Extraction and Community Question Answering, which are also …
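The abstract outlines an iterative loop in which the machine keeps the examples it is confident about as training material for the workers, while the workers annotate the harder examples that are then fed back into the classifier. The sketch below is a minimal illustration of that loop, not the authors' implementation: the scikit-learn classifier, the `keep_fraction` threshold, the `ask_workers` callback, and the oracle used in the toy usage are all assumptions introduced for the example.

```python
# Minimal sketch of an iterative human-machine collaborative loop (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def collaborative_loop(X_seed, y_seed, X_pool, ask_workers, iterations=3, keep_fraction=0.5):
    """Each round: (1) train the classifier on the current labels,
    (2) keep its highest-confidence pool examples as training/feedback material for workers,
    (3) ask workers to annotate the lower-confidence examples,
    (4) retrain on the newly annotated, more accurate examples."""
    X_train, y_train = X_seed, y_seed
    clf = LogisticRegression(max_iter=1000)
    for _ in range(iterations):
        if len(X_pool) == 0:
            break
        clf.fit(X_train, y_train)
        # Machine side: rank unlabelled items by prediction confidence.
        confidence = clf.predict_proba(X_pool).max(axis=1)
        order = np.argsort(-confidence)
        cut = max(1, int(len(order) * keep_fraction))
        high_idx, low_idx = order[:cut], order[cut:]
        # High-confidence items, with the machine's labels, serve as worker training material.
        worker_training = (X_pool[high_idx], clf.predict(X_pool[high_idx]))
        # Human side: workers annotate the harder, low-confidence items.
        y_new = ask_workers(worker_training, X_pool[low_idx])
        X_train = np.vstack([X_train, X_pool[low_idx]])
        y_train = np.concatenate([y_train, y_new])
        X_pool = X_pool[high_idx]  # remaining unlabelled items for the next round
    return clf

# Toy usage: an oracle stands in for the crowd workers.
X, y = make_classification(n_samples=400, random_state=0)
X_seed, y_seed, X_pool, y_pool = X[:50], y[:50], X[50:], y[50:]
oracle = {tuple(x): label for x, label in zip(X_pool, y_pool)}
ask_workers = lambda training_material, items: np.array([oracle[tuple(x)] for x in items])
model = collaborative_loop(X_seed, y_seed, X_pool, ask_workers)
```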

Cited by 13 publications (8 citation statements)
References 15 publications
“…It is expected that perceptual learning occurs in microtasks with feedback. Abad et al showed that rule-based feedback given to workers who provided incorrect answers is effective for training workers (Abad, Nabi, and Moschitti 2017). Our question is whether this happens even with a simple form of feedback.…”
Section: Related Work
confidence: 92%
“…It is expected that perceptual learning occurs in microtasks with feedback. (Abad et al, 2017) showed that rule-based feedback given to workers who provided incorrect answers is useful in training. Our question is whether this happens even with a simple form of feedback.…”
Section: Related Work
confidence: 99%
“…Research in crowdsourcing has focused on several different issues: aggregating labels from multiple assessors to improve the quality of the gathered assessments, by using unsupervised [Bashir et al, 2013; Hosseini et al, 2012], supervised [Pillai et al, 2013; Raykar and Yu, 2012; Raykar et al, 2010], and hybrid [Harris and Srinivasan, 2013] approaches; behavioural aspects [Kazai et al, 2012b]; proper and careful design of Human Intelligence Tasks (HITs) [Alonso, 2013; Grady and Lease, 2010; Ipeirotis and Gabrilovich, 2014; Kazai et al, 2011], also using gamification to improve quality [Eickhoff et al, 2012] and game theory to increase user engagement [Moshfeghi et al, 2016]; human-machine collaborative methods for training crowdsource workers [Abad, 2017; Abad et al, 2017]; and, routing tasks to proper assessors [Jung and Lease, 2015; Law et al, 2011].…”
Section: Crowdsourcing for Ground-truth Creation
confidence: 99%