Crowdsourcing contests have become widely adopted for idea generation and problem-solving by companies across industries. The success of crowdsourcing depends on sustained participation and quality submissions from individuals. Yet little is known about the factors that influence individuals' continued participation in these contests. We address this issue by conducting an empirical study using data from an online crowdsourcing contest platform, Kaggle, which delivers data science and machine learning solutions and models to its clients. The findings show that community activities and team activities do not motivate continued participation, whereas tenure has a significant effect. We also found statistically significant effects of prize amount, number of competitions, previous team performance, and competition duration on individuals' sustained participation in crowdsourcing contests. This research contributes to the literature by identifying the factors that influence individuals' sustained participation in crowdsourcing contests.
Purpose: To obtain optimal deliverables, more and more crowdsourcing platforms allow contest teams to submit tentative solutions and update scores/rankings on public leaderboards. Such feedback-seeking behavior for progress benchmarking pertains to the team representation activity of boundary spanning. The literature on virtual team performance primarily focuses on team characteristics, among which network closure is generally considered a positive factor. This study further examines how boundary spanning helps mitigate the negative impact of network closure.
Design/methodology/approach: This study collected data on 9,793 teams in 246 contests from Kaggle.com. Negative binomial regression modeling and linear regression modeling are employed to investigate the relationships among network closure, boundary spanning, and team performance in crowdsourcing contests.
Findings: Whereas network closure turns out to be a liability when virtual teams seek platform feedback, boundary spanning mitigates its impact on team performance. Beyond this partial mediation, boundary spanning experience and previous contest performance serve as potential moderators.
Practical implications: The findings offer helpful implications for researchers and practitioners on how to break network closure and encourage boundary spanning by establishing facilitating structures in crowdsourcing contests.
Originality/value: The study advances the understanding of the theoretical relationships among network closure, boundary spanning, and team performance in crowdsourcing contests.
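As a rough illustration of the kind of analysis the abstract describes, the sketch below fits a negative binomial regression for a count outcome and a linear (OLS) regression for a continuous outcome using statsmodels. The dataset, file name, and column names (feedback_count, network_closure, boundary_spanning, team_size, prior_performance, final_score) are hypothetical placeholders, not the authors' actual variables or Kaggle's fields.

```python
# Illustrative sketch only: fits the two model types named in the abstract.
# All variable and file names below are assumptions for demonstration.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

teams = pd.read_csv("teams.csv")  # hypothetical team-level dataset

# Count outcome (e.g., number of leaderboard submissions) -> negative binomial regression
nb_model = smf.glm(
    formula="feedback_count ~ network_closure + team_size + prior_performance",
    data=teams,
    family=sm.families.NegativeBinomial(),
).fit()
print(nb_model.summary())

# Continuous outcome (e.g., final contest standing/score) -> linear regression,
# entering feedback seeking alongside network closure to probe partial mediation
ols_model = smf.ols(
    formula="final_score ~ network_closure + feedback_count + team_size",
    data=teams,
).fit()
print(ols_model.summary())
```

This is only a schematic of the modeling approach; the actual study may include additional controls, interaction terms for the moderators, and formal mediation tests.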