2013
DOI: 10.1527/tjsai.28.243

Learning from Crowds and Experts

Abstract: Crowdsourcing services are often used to collect large amounts of labeled data for machine learning. Although they provide an easy way to obtain labels at very low cost in a short period, they have serious limitations. One of them is the variable quality of crowd-generated data. There have been many attempts to increase the reliability of crowd-generated data and the quality of classifiers trained on such data. However, in these problem settings, relatively few researchers have tried using expert…

Citation types: 0 supporting, 22 mentioning, 0 contrasting

Year published (citing works): 2013–2017

Cited by 15 publications (22 citation statements); references 1 publication.
“…Our approach aims at guiding an expert when validating input from crowd workers, which differs from other crowdsourcing approaches that include experts, such as [17,23,24]. In particular, Karger et al. [24] rely on experts who know the reliability of crowd workers in order to prove the optimality of their approach, a premise that is not realistic in the general crowdsourcing setting explored in this work.…”
Section: Related Work (mentioning)
confidence: 99%
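To make the cited premise concrete, here is a minimal sketch (an editorial illustration, not code from Karger et al. [24] or the surveyed papers) of why knowing each worker's reliability helps: for independent binary labels, weighting each vote by the log-odds of the worker's accuracy gives the Bayes-optimal aggregate, and those accuracies are exactly what an expert would need to supply.

```python
import math

def weighted_vote(labels, reliabilities):
    """Aggregate binary labels in {-1, +1}, weighting each worker's vote
    by the log-odds of their known accuracy p (Bayes-optimal for
    independent workers)."""
    score = 0.0
    for y, p in zip(labels, reliabilities):
        score += y * math.log(p / (1.0 - p))
    return 1 if score >= 0 else -1

# Two 60%-accurate workers vote +1; one 90%-accurate worker votes -1.
# The single reliable worker outweighs the pair, so the aggregate is -1.
print(weighted_vote([+1, +1, -1], [0.6, 0.6, 0.9]))  # -> -1
```

Without the reliability estimates, the same three votes would reduce to an unweighted majority and flip to +1, which is why the statement calls the known-reliability premise unrealistic in the general setting.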
“…Other work focuses on a related but fundamentally different problem. The techniques presented in [17,23] target the identification of correct labels for new objects based on the labels for known objects, whereas we aim at validation, i.e., finding the correct labels for known objects.…”
Section: Related Work (mentioning)
confidence: 99%
“…Kajino et al. [9] addressed this problem by straightforwardly extending some existing models. Wauthier and Jordan [18] also used some expert labels.…”
Section: Related Work (mentioning)
confidence: 99%
“…A large α will cause the prior distribution over w to peak steeply at its mean, so the effect of absorbing one expert label will be relatively small, leading to a final classifier that depends heavily on the prior mean, which is the simple combination of the personal classifiers.…”
[Interleaved algorithm excerpt: Set μ = μ_post, Σ = Σ_post; end for; Output: personal classifier ensemble F, mean μ, and covariance matrix Σ.]
Section: Combination Of Evidence From Crowds And Experts (mentioning)
confidence: 99%
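The effect described in this statement can be reproduced with a one-line conjugate Gaussian update; the following is a hedged sketch of the general mechanism, not the paper's actual algorithm. Assume a prior N(μ0, 1/α) over a scalar weight w and a single expert observation y with noise variance σ²; the posterior mean is (αμ0 + y/σ²)/(α + 1/σ²), which stays pinned to μ0 as α grows.

```python
def posterior_mean(mu0, alpha, y, sigma2=1.0):
    """Conjugate Gaussian update for a N(mu0, 1/alpha) prior after one
    observation y with noise variance sigma2."""
    return (alpha * mu0 + y / sigma2) / (alpha + 1.0 / sigma2)

mu0, y = 0.0, 5.0  # prior mean (combined personal classifiers) vs. an expert signal
for alpha in (0.1, 1.0, 10.0, 100.0):
    print(f"alpha={alpha:6.1f}  mu_post={posterior_mean(mu0, alpha, y):.3f}")
# alpha=0.1 -> 4.545 ... alpha=100.0 -> 0.050: with a large alpha, one expert
# label barely moves the posterior away from the prior mean.
```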
“…This triggered a new research subject: modeling low-quality data in the crowdsourcing scenario. Sheng et al. [17] have shown that obtaining more labels is helpful even when label quality is low, and models have been proposed to infer the ground truth [11,12] or to learn a classifier [16,10] from large but unreliable crowdsourced datasets.…”
Section: Introduction (mentioning)
confidence: 99%
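Sheng et al.'s [17] observation quoted above is easy to verify numerically: if independent workers are each correct with probability p > 0.5, the majority vote over n labels is correct with probability P(Binomial(n, p) > n/2), which increases with n. The sketch below (an editorial illustration, not their code) computes this for p = 0.6.

```python
from math import comb

def majority_accuracy(n, p):
    """P(the majority of n independent labels, each correct with
    probability p, is correct); ties on even n split 50/50."""
    prob = sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))
    if n % 2 == 0:
        prob += 0.5 * comb(n, n // 2) * (p * (1 - p)) ** (n // 2)
    return prob

for n in (1, 5, 11):
    print(n, round(majority_accuracy(n, 0.6), 3))
# 1 -> 0.6, 5 -> 0.683, 11 -> 0.753: more low-quality labels still help.
```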