2019
DOI: 10.1145/3310227
Multi-task Crowdsourcing via an Optimization Framework

Abstract: The unprecedented amounts of data have catalyzed the trend of combining human insights with machine learning techniques, facilitating the use of crowdsourcing to enlist label information both effectively and efficiently. One crucial challenge in crowdsourcing is the diverse worker quality, which determines the accuracy of the label information provided by such workers. Motivated by the observation that the same set of tasks is typically labeled by the same set of workers, we studied their behaviors across mu…
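The abstract's central challenge, diverse worker quality, is commonly handled by weighting each worker's votes during label aggregation. As a minimal, hypothetical sketch (not the optimization framework proposed in this paper), a reliability-weighted majority vote might look like:

```python
from collections import defaultdict

def weighted_majority_vote(labels, worker_weights):
    """Aggregate noisy crowd labels for one task, weighting each
    worker's vote by an estimated reliability in [0, 1]."""
    totals = defaultdict(float)
    for worker, label in labels:
        # Unknown workers get a neutral default weight of 0.5.
        totals[label] += worker_weights.get(worker, 0.5)
    return max(totals, key=totals.get)

# Three workers label one task; w2 is the least reliable.
votes = [("w1", "cat"), ("w2", "dog"), ("w3", "cat")]
weights = {"w1": 0.9, "w2": 0.3, "w3": 0.8}
print(weighted_majority_vote(votes, weights))  # cat (1.7 vs 0.3)
```

The names `weighted_majority_vote` and the example weights are illustrative assumptions; real systems estimate the weights jointly with the labels (e.g. via EM-style updates) rather than fixing them in advance.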

Cited by 9 publications (3 citation statements)
References 50 publications
“…Dynamic transfer learning (Hoffman et al, 2014; Bitarafan et al, 2016; Mancini et al, 2019) refers to knowledge transfer from a static source task to a dynamic target task. Compared to standard transfer learning on static source and target tasks (Pan and Yang, 2009; Zhou et al, 2017, 2019a, b; Tripuraneni et al, 2020; Wu and He, 2021), dynamic transfer learning is a more challenging but realistic problem setting due to its time-evolving task relatedness. More recently, various dynamic transfer learning frameworks have been built from the following perspectives: self-training (Kumar et al, 2020; Chen and Chao, 2021; Wang et al, 2022), incremental distribution alignment (Bobu et al, 2018; Wulfmeier et al, 2018; Wang H. et al, 2020; Wu and He, 2020, 2022a), meta-learning (Liu et al, 2020; Wu and He, 2022b), contrastive learning (Tang et al, 2021; Taufique et al, 2022), etc.…”
Section: Related Work
confidence: 99%
“…As for subspace learning, the authors of [14] proposed a deep multi-view robust representation learning algorithm based on auto-encoders to learn a shared representation from multi-view observations; [11] proposed online Bayesian subspace multi-view learning by modeling the variational approximate posterior inferred from past samples; [51] proposed M2VW for the multi-view multi-worker learning problem by leveraging the structural information between multiple views and multiple workers; [30] proposed the CR-GAN method to learn a complete representation for multi-view generation in the adversarial setting through the collaboration of two learning pathways in a parameter-sharing manner. Different from [30, 41-44, 50-53], in this paper we focus on the multi-view classification problem and aim to extract both the shared information and the view-specific information in the adversarial setting, where a view consistency constraint with label information further regularizes the generated representation to improve predictive performance. Recently, a growing number of studies on model explanation [9, 15, 22, 25, 27, 47] reveal a surge of research interest in model interpretation.…”
Section: Multi-view Learning and Interpretable Learning
confidence: 99%
“…Various task assignment strategies have been proposed. They focus on different aspects, such as an individual worker's reliability and intention, a worker's contribution to the ground truth, task difficulty, and so on [3, 6, 26, 27, 34, 44, 47, 54]. These strategies generally fall into three categories: task-centered, worker-centered, and combined task- and worker-centered.…”
Section: Introduction
confidence: 99%
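To make the worker-centered category above concrete, here is a toy, hypothetical sketch (not any of the cited strategies): a greedy router that sends each task to the most reliable workers who still have spare capacity. The function name, the capacity parameter, and the reliability scores are all assumptions for illustration.

```python
def assign_tasks(tasks, workers, reliability, cap, labels_per_task=2):
    """Greedy reliability-aware assignment: each task is routed to the
    most reliable workers that still have capacity (at most `cap`
    tasks per worker, up to `labels_per_task` workers per task)."""
    load = {w: 0 for w in workers}
    ranked = sorted(workers, key=lambda w: reliability[w], reverse=True)
    assignment = {}
    for t in tasks:
        chosen = [w for w in ranked if load[w] < cap][:labels_per_task]
        for w in chosen:
            load[w] += 1
        assignment[t] = chosen
    return assignment

rel = {"w1": 0.95, "w2": 0.6, "w3": 0.8}
plan = assign_tasks(["t1", "t2", "t3"], ["w1", "w2", "w3"], rel, cap=2)
# t1 and t2 go to the two most reliable workers; once w1 and w3 reach
# capacity, t3 falls back to w2 and receives only a single label.
print(plan)
```

A task-centered variant would instead rank tasks (e.g. by estimated difficulty) and spend the reliable workers' capacity on the hardest ones first; the combined category trades off both rankings.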