A typical crowdsourcing software development (CSD) marketplace consists of software tasks as service demands and a pool of freelance developers as service suppliers. Highly dynamic and competitive CSD marketplaces can lead to task failure due to unforeseen risks, such as increased competition over a shared worker supply and uncertainty about workers' experience and skills. To improve CSD effectiveness, it is essential to better understand, and plan for, dynamic worker characteristics and the risks associated with CSD processes. In this paper, we present a hybrid simulation model, CrowdSim, to forecast crowdsourcing task failure risk on competitive CSD platforms. CrowdSim is composed of three layered components: the macro-level reflects the overall crowdsourcing platform based on system dynamics, the meso-level represents the task life cycle based on discrete event simulation, and the micro-level models the crowd workers' decision-making processes based on agent-based simulation. CrowdSim is evaluated on three CSD decision scenarios using a real-world historical dataset, and the results demonstrate its potential to help crowdsourcing managers explore crowdsourcing outcomes under different task scheduling options.
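The abstract does not include code, so purely as an illustration of how a three-layer hybrid simulation of this kind might be wired together, here is a minimal Python sketch. All names (Worker, TaskLifecycle, Platform) and the worker registration rule are hypothetical assumptions for exposition, not CrowdSim's actual design; the coupling simply mirrors the macro/meso/micro split described above.

```python
import heapq
import itertools
import random


# --- Micro level: agent-based worker decision-making (hypothetical rule) ---
class Worker:
    def __init__(self, worker_id, skill):
        self.worker_id = worker_id
        self.skill = skill  # 0..1, proxy for experience

    def decides_to_register(self, task, competition):
        # Illustrative rule: workers favor rewarding tasks with little
        # competition relative to their skill level.
        attractiveness = task["reward"] * self.skill / (1 + competition)
        return random.random() < min(1.0, attractiveness)


# --- Meso level: discrete-event task life cycle (post -> register -> deadline) ---
class TaskLifecycle:
    def __init__(self):
        self._events = []                 # (time, tie-breaker, name, task)
        self._counter = itertools.count()

    def schedule(self, time, name, task):
        heapq.heappush(self._events, (time, next(self._counter), name, task))

    def next_event(self):
        return heapq.heappop(self._events) if self._events else None


# --- Macro level: platform-wide stocks of open tasks and shared worker supply ---
class Platform:
    def __init__(self, workers):
        self.workers = workers
        self.open_tasks = []

    def post_task(self, lifecycle, task, time):
        self.open_tasks.append(task)
        lifecycle.schedule(time + task["duration"], "deadline", task)

    def step(self, lifecycle):
        # Couple the layers: every open task competes for the same worker pool,
        # so more open tasks means more competition for each one.
        for task in self.open_tasks:
            competition = len(self.open_tasks)
            registrants = [w for w in self.workers
                           if w.decides_to_register(task, competition)]
            task["registrants"] = len(registrants)

        event = lifecycle.next_event()
        if event:
            _, _, name, task = event
            if name == "deadline":
                task["failed"] = task["registrants"] == 0
                self.open_tasks.remove(task)


if __name__ == "__main__":
    random.seed(0)
    workers = [Worker(i, random.random()) for i in range(50)]
    platform = Platform(workers)
    lifecycle = TaskLifecycle()
    task = {"reward": 0.8, "duration": 5, "registrants": 0, "failed": None}
    platform.post_task(lifecycle, task, time=0)
    platform.step(lifecycle)
    print("task failed:", task["failed"])
```

Running the script posts a single task, lets the agent layer decide who registers, and resolves the deadline event; a failure is recorded when no workers register, which is the kind of outcome a manager could explore under different task-scheduling options.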