2019
DOI: 10.1016/j.cogsys.2019.04.004
Modelling Cognitive Bias in Crowdsourcing Systems

Cited by 24 publications (16 citation statements) · References 26 publications
“…In Experiment Set D, the highest performance was achieved by the SVM classifier for the BCE-CE input. These results once again indicate that, even though the self-reported confidence values are not particularly helpful when used within the traditional voting methods context (Li and Varshney, 2017; Saab et al., 2019), as can also be seen from the performance of the CWMV algorithm in this study, incorporating them into an ML classifier can help attain better performance, specifically higher F1-scores for highly imbalanced datasets.…”
Section: Performance of Aggregation Methods on Imbalanced Datasets
Confidence: 59%
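The excerpt contrasts CWMV (confidence-weighted majority voting) with ML-based aggregation but does not define CWMV. As a minimal sketch, assuming the common formulation in which each binary vote is weighted by the log-odds of the worker's reported confidence (the Bayes-optimal weight when confidences are calibrated), the aggregation could look like this; the function name and the clamping constant are illustrative, not from the cited papers:

```python
import math

def cwmv(votes, confidences):
    """Confidence-weighted majority vote over binary labels {0, 1}.

    Assumed formulation: each vote contributes log(c / (1 - c)),
    positive for label 1 and negative for label 0, and the sign of
    the total decides the aggregate label.
    """
    eps = 1e-6  # clamp confidences to avoid infinite weights at 0 or 1
    score = 0.0
    for v, c in zip(votes, confidences):
        c = min(max(c, eps), 1 - eps)
        weight = math.log(c / (1 - c))
        score += weight if v == 1 else -weight
    return 1 if score > 0 else 0

# A single high-confidence dissenter can outweigh two low-confidence votes:
print(cwmv([1, 1, 0], [0.6, 0.55, 0.9]))  # -> 0
```

This also illustrates the excerpt's caveat: when self-reported confidences are biased (e.g., overconfident workers), these weights misallocate influence, which is one reason a trained classifier over the same inputs can outperform CWMV.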
“…Although subjective confidence values can be a valid predictor of accuracy in some cases (Matoulkova, 2017; Görzen et al., 2019), in many others they may degrade performance owing to cognitive biases that prevent a realistic assessment of one's abilities (Saab et al., 2019). Another natural approach is to weigh responses based on some form of worker reliability.…”
Section: Literature Review
Confidence: 99%
“…self-efficacy, effort, and performance in crowdsourcing teams | 1 1 0
[12] Cooperation or competition: When do people contribute more? A field experiment on gamification of crowdsourcing | 1 1 0
[11] Crowdsourcing: A taxonomy and systematic mapping study | 1 1 0
[55] Crowdsourcing contests | 1 1 0
[21] Crowdsourcing not all sourced by the crowd: An observation on the behavior of Wikipedia participants | 1 1 0
[56] Efficient crowdsourcing of unknown experts using bounded multi-armed bandits | 1 1 1
[57] Hybrid crowd-based decision support in business processes | 1 1 1
[58] Improving accuracy and lowering cost in crowdsourcing through an unsupervised expertise estimation approach | 1 0 1
[59] Incentivizing social media users for mobile crowdsourcing | 1 1 1
[60] Information technology (IT)-enabled crowdsourcing: A conceptual framework | 1 1 1
[61] Inspiring crowdsourcing communities to create novel solutions: Competition design and the mediating role of trust | 1 1 0
[62] Mobile crowd sensing: Taxonomy, applications, challenges, and solutions | 0 1 1
[63] Modeling cognitive bias in crowdsourcing systems | 1 1 0
[64] Open or proprietary? Choosing the right crowdsourcing platform for innovation | 1 1 0
[23] Privacy-preserving QoI-aware participant coordination for mobile crowdsourcing | 0 1 1
[2] Real-time crowdsourcing with payment of idle workers in the retainer model | 1 1 0
[65] SenseChain: A blockchain-based crowdsensing framework for multiple requesters and multiple workers | 0 1 1
[14] The wisdom of crowds: The potential of online communities as a tool for data analysis | 1 1 0
[30] Toward collaborative software engineering leveraging the crowd | 1 1 0
[8] Trait motivations of crowdsourcing and task choice: A distal-proximal perspective | 1 1 0
[9] Trust-based privacy-aware participant selection in social participatory sensing | 0 1 1
…in gaining the rewards that are associated with tasks and thus do not work sincerely, and this is a negative factor of the crowd [29].…”
Section: Significant Features of Crowd in Crowdsourcing (RQ1)
Confidence: 99%
“…Trainees may accomplish tasks [3, 41] in the crowdsourcing activity. The crowd may be autonomous [34], fast [2], unique [69], appropriate [3], right [28, 79], reliable/efficient [2, 25, 72, 73, 80], loyal [70], truthful [36], and trustworthy [29, 42, 44, 50, 63], completing assigned tasks sincerely. Workers are coordinative and adaptable [16] (as they can change themselves with the change in the work environment), "energetic" [10, 17], capable [1, 42, 60] of performing tasks, and "creative", as they creatively [67, 68, 78] perform different types of work.…”
Section: Significant Features of Crowd in Crowdsourcing (RQ1)
Confidence: 99%