Crowdsourcing offers fast and cost-effective access to human labor for business projects as well as to research participants for scientific projects. Due to the loose links between crowdsourcing employers and workers, quality control is even more important than in the offline realm. We developed and validated the web-delivered attention test attentiveWeb in two versions: (1) to derive advance filters that identify workers who produce low-quality results and (2) to gauge the attention of workers who pass the advance filter. We applied attentiveWeb in three parallel user studies: one on the crowdsourcing platform Microworkers (N = 539), another on Figure Eight (N = 333), and a third in the online panel WiSoPanel (N = 1,837). The user studies confirm that advance filtering is useful for screening out poor workers. We propose an easily computed filter based on objective user behavior in attentiveWeb. With regard to attention, despite the more severe advance filtering on Microworkers, its workers' attention was lowest, followed by workers from Figure Eight, and it was highest in WiSoPanel. The platform differences in attention were not entirely explained by known differences (demographic and otherwise) among the users of the three platforms. The attention test attentiveWeb has high Cronbach's α and split-half reliability. The first version of attentiveWeb predicted the performance of the same crowdworkers in the second version two years later. We release attentiveWeb for assessing crowdworkers' attention to the research community and the wider public. The attention test attentiveWeb is open source and free to use.
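Both reliability figures mentioned above can be computed directly from per-item scores. The following is a minimal sketch (not the authors' code; the item matrix and scoring scale are purely hypothetical) of Cronbach's α and odd-even split-half reliability with the Spearman-Brown correction:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an items matrix (respondents x items)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def split_half_reliability(scores):
    """Odd-even split-half correlation with Spearman-Brown correction."""
    scores = np.asarray(scores, dtype=float)
    odd = scores[:, 0::2].sum(axis=1)
    even = scores[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

# Hypothetical example: 6 workers, 4 attention-test items scored 0-5.
demo = [[5, 4, 5, 4],
        [3, 3, 2, 3],
        [4, 5, 4, 4],
        [1, 2, 1, 2],
        [5, 5, 4, 5],
        [2, 2, 3, 2]]
print(f"Cronbach's alpha: {cronbach_alpha(demo):.2f}")
print(f"Split-half reliability: {split_half_reliability(demo):.2f}")
```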
Online labor markets have experienced rapid growth in recent years. They allow for long-distance transactions and offer workers access to a potentially 'global' pool of labor demand. As such, they have the potential to act as a substitute for shrinking local income opportunities. Using detailed U.S. data from a large online labor platform for microtasks, we study how local unemployment affects participation and work intensity online. We find that, at the extensive margin, an increase in commuting-zone-level unemployment is associated with more individuals joining the platform and becoming active in fulfilling tasks. At the intensive margin, our results show that online labor supply becomes more elastic as unemployment rises. These results are driven by a decrease in the reservation wage during standard working hours. Finally, the effects are transient and do not translate into a permanent increase in platform participation by incumbent users. Our findings highlight that many workers treat online labor markets as a substitute for offline work when generating income, especially in periods of low local labor demand. However, the evidence also suggests that, despite their potential to attract workers, online markets for microtasks are currently not viable as a long-run alternative for most workers.
Crowdsourcing platforms provide easy and scalable access to a human workforce that can, e.g., provide subjective judgements, tag information, or even generate knowledge. In conjunction with machine clouds offering scalable access to computing resources, these human cloud providers enable numerous applications that would not have been possible a few years ago. However, in order to build sustainable services on top of this inter-cloud environment, scalability considerations have to be made. While cloud computing systems are already well studied in terms of dimensioning hardware resources, there is still little work on the appropriate scaling of crowdsourcing platforms. This is especially challenging, as the complex interaction between all involved stakeholders (platform providers, workers, and employers) has to be considered. The contribution of this work is threefold. First, we develop a model for common crowdsourcing platforms and implement it using a simulative approach, which is validated by comparison with an analytic M[X]/M/c system. Second, we evaluate inter-arrival times as well as campaign-size distributions based on a dataset from a large commercial crowdsourcing platform to derive realistic model parameters and illustrate the differences from the analytic approximation. Finally, we perform a parameter study using the simulation model to derive guidelines for dimensioning crowdsourcing platforms, considering the parameters relevant to the involved stakeholders, i.e., the delay before work on a task begins and the workload of the workers.
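To make the queueing notation concrete, the sketch below simulates a batch-arrival, multi-server (M[X]/M/c-style) system and reports the mean delay before work on a task begins. It is not the authors' simulation model; all parameters and the campaign-size distribution are hypothetical:

```python
import heapq
import random

def simulate_mx_m_c(lam, batch_size_fn, mu, c, n_campaigns, seed=0):
    """Batch-arrival M[X]/M/c queue: campaigns arrive as a Poisson process
    with rate lam, each bringing batch_size_fn(rng) tasks; c workers serve
    tasks with exponential service times at rate mu. Returns the mean
    waiting time before a task is picked up."""
    rng = random.Random(seed)
    t = 0.0
    free_at = [0.0] * c                # time at which each worker becomes free
    heapq.heapify(free_at)
    waits = []
    for _ in range(n_campaigns):
        t += rng.expovariate(lam)              # campaign inter-arrival time
        for _ in range(batch_size_fn(rng)):    # all tasks of a campaign arrive at once
            start = max(t, heapq.heappop(free_at))   # earliest available worker
            waits.append(start - t)
            heapq.heappush(free_at, start + rng.expovariate(mu))
    return sum(waits) / len(waits)

# Hypothetical parameters: 2 campaigns/hour, campaign sizes with mean ~30 tasks,
# each worker finishes a task in 6 minutes on average, 25 workers online.
mean_wait = simulate_mx_m_c(
    lam=2.0,
    batch_size_fn=lambda rng: min(200, int(rng.expovariate(1 / 30)) + 1),
    mu=10.0,
    c=25,
    n_campaigns=5_000,
)
print(f"Mean delay before a task is started: {mean_wait:.3f} h")
```

Assigning each task, in arrival order, to the worker who becomes free earliest reproduces FCFS waiting times for a multi-server queue, which is the quantity of interest (delay before work begins) named in the abstract.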
Crowdsourcing allows subjective user ratings to be collected promptly and on a large scale. This enables, for example, building subjective models for the perception of technical systems in the field of quality-of-experience research or studying cultural aspects of aesthetic appeal. In addition to research in technical domains, crowdsourced subjective ratings are also gaining relevance in medical research, such as the evaluation of aesthetic surgeries. In line with this, we illustrate a novel use case for crowdsourced subjective ratings of deformational cranial asymmetries in newborns. Deformational cranial asymmetries are deformations of a newborn's head that might, e.g., result from resting on the same spot for an extended time. Although objective metrics exist to quantify the deformation, there is little understanding of how those values match the severity of the deformational cranial asymmetries as subjectively perceived by humans. This paper starts filling this gap by presenting a crowdsourcing-based solution to collect a large set of subjective ratings on examples of deformational cranial asymmetries from different groups that might perceive those deformations differently. In particular, we consider pediatricians, parents of children with cranial deformation, and naive crowdworkers. For those groups, we further analyze the consistency of their subjective ratings, the differences in ratings between the groups, and the effects of the study design.
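The abstract does not specify which consistency measures are used; as one common choice in subjective-rating studies, the sketch below computes within-group consistency as the standard deviation of opinion scores (SOS) per stimulus and between-group agreement as the correlation of per-stimulus mean opinion scores (MOS), using purely hypothetical ratings:

```python
import numpy as np

# Hypothetical ratings (stimuli x raters) on a 5-point severity scale
# for two of the rater groups mentioned in the abstract.
pediatricians = np.array([[4, 5, 4], [2, 2, 3], [5, 4, 5], [1, 2, 1]])
crowdworkers  = np.array([[3, 4, 4], [2, 3, 2], [5, 5, 4], [2, 1, 2]])

# Within-group consistency: standard deviation of opinion scores (SOS) per stimulus.
sos_ped = pediatricians.std(axis=1, ddof=1)
sos_crowd = crowdworkers.std(axis=1, ddof=1)

# Between-group agreement: correlation of per-stimulus mean opinion scores (MOS).
mos_ped = pediatricians.mean(axis=1)
mos_crowd = crowdworkers.mean(axis=1)
r = np.corrcoef(mos_ped, mos_crowd)[0, 1]

print("SOS pediatricians:", np.round(sos_ped, 2))
print("SOS crowdworkers: ", np.round(sos_crowd, 2))
print(f"MOS correlation between groups: {r:.2f}")
```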