Crowdsourcing has gained considerable attention in practice in recent years. Numerous companies have drawn on this concept for performing different tasks and value creation activities. Nevertheless, despite its popularity, there is still comparatively little well-founded knowledge on crowdsourcing, particularly with regard to crowdsourcing intermediaries. Crowdsourcing intermediaries play a key role in crowdsourcing initiatives, as they establish the connection between crowdsourcing companies and the crowd. However, the question of how crowdsourcing intermediaries manage crowdsourcing initiatives and the associated challenges has not yet been addressed by research. We address these issues by conducting a case study of testCloud, a German start-up crowdsourcing intermediary that offers software testing services to companies intending to partly or fully outsource their testing activities to a crowd. The case study shows that testCloud faces three main challenges: managing the process, managing the crowd, and managing the technology. For each dimension, we outline the mechanisms that testCloud applies to meet the challenges associated with crowdsourcing projects.
Crowdsourced tasks are very diverse – and so are platform types. These platforms fall into four categories, each demanding different governance mechanisms. The main goal of microtasking crowdsourcing platforms is the scalable and time-efficient batch processing of highly repetitive tasks. Crowdsourcing platforms for information pooling aggregate contributions such as votes, opinions, assessments, and forecasts through approaches such as averaging, summation, or visualization. Broadcast search platforms collect contributions to solve tasks in order to gain alternative insights and solutions from people outside the organization and are particularly suited to solving challenging technical, analytical, scientific, or creative problems. Open collaboration platforms invite contributors to team up to jointly solve complex problems in cases where solutions require the integration of distributed knowledge and the skills of many contributors. Companies establishing crowdsourcing platforms of any type should continuously monitor and adjust their governance mechanisms; the quality and quantity of contributions, project runtime, and the effort required to conduct the crowdsourcing project are good starting points for such monitoring.
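To make the information-pooling approach concrete, the following minimal sketch aggregates independent numeric crowd contributions by averaging, one of the aggregation approaches named above. The task, contributor identifiers, and values are hypothetical illustrations, not data from any of the studies cited here.

```python
# Minimal sketch of information pooling: combining independent crowd
# estimates into a single collective forecast. All names and values
# below are invented for illustration.
from statistics import mean, median

# Each contributor submits an independent numeric forecast,
# e.g., estimated first-year sales of a product in units.
forecasts = {
    "contributor_01": 1200,
    "contributor_02": 950,
    "contributor_03": 1100,
    "contributor_04": 4000,  # an outlier
    "contributor_05": 1050,
}

values = list(forecasts.values())

# Averaging is one aggregation approach named in the abstract above;
# the median is a common robust alternative when outliers are expected.
print(f"Mean forecast:   {mean(values):.1f}")
print(f"Median forecast: {median(values):.1f}")
```

In this toy data, the single outlier pulls the mean well above the median, which is why platforms pooling open-ended estimates often prefer robust aggregates or trim extreme contributions before averaging.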
To profit from crowdsourcing, organizations can engage in four different approaches: microtasking, information pooling, broadcast search, and open collaboration. In this paper, we present 21 governance mechanisms that can help organizations manage their crowdsourcing platforms. We investigate the effectiveness of these governance mechanisms in 19 case studies and recommend specific configurations of the mechanisms for each of the four crowdsourcing approaches. We also offer guidance to organizations that host a crowdsourcing platform, with recommendations for implementing governance mechanisms on their platforms and for building up crowdsourcing governance capabilities.
The last decade has witnessed the proliferation of crowdsourcing across various academic domains, including strategic management, computer science, and IS research. Numerous companies have drawn on this concept and leveraged the wisdom of crowds for various purposes. However, not all crowdsourcing projects turn out to be a striking success. Hence, research and practice are on the lookout for the main factors influencing the success of crowdsourcing projects. In this context, proper governance is considered the key to success by several researchers. However, little is known about governance mechanisms and their impact on project outcomes. We address this issue by means of a multiple case analysis in which we examine crowdsourcing projects on collaboration-based and/or competition-based crowdsourcing systems. Our initial study reveals that task definition mechanisms and quality assurance mechanisms have the highest impact on the success of crowdsourcing projects, whereas task allocation mechanisms are less decisive.
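Quality assurance mechanisms of the kind this study identifies as most decisive are commonly implemented through redundant task assignment combined with majority voting. The sketch below illustrates that general technique on invented data; it is not the specific mechanism examined in the case analysis above, and the tasks, answers, and threshold are assumptions made for illustration.

```python
# Hypothetical sketch of a common quality assurance mechanism in
# crowdsourcing: assign each microtask to several workers redundantly
# and accept the majority answer, escalating ambiguous tasks.
from collections import Counter

# Redundant answers per task, e.g., labels from three workers each.
answers_by_task = {
    "task_1": ["cat", "cat", "dog"],
    "task_2": ["dog", "dog", "dog"],
    "task_3": ["cat", "dog", "bird"],  # no clear majority
}

def majority_vote(answers, threshold=0.5):
    """Return the most frequent answer if its share exceeds the
    threshold; otherwise return None to signal an ambiguous task."""
    label, count = Counter(answers).most_common(1)[0]
    return label if count / len(answers) > threshold else None

for task, answers in answers_by_task.items():
    result = majority_vote(answers)
    print(f"{task}: {result if result is not None else 'escalate for review'}")
```

The escalation path for tasks without a clear majority is itself a governance choice: platforms may route such tasks to additional workers, to expert reviewers, or back to the requester.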