Problem definition: We study the contest duration and the award scheme of an innovation contest in which an organizer elicits solutions to an innovation-related problem from a group of agents.

Academic/practical relevance: Our interviews with practitioners at crowdsourcing platforms have revealed that the duration of a contest is an important operational decision. Yet the theoretical literature has long overlooked this decision, and it also fails to adequately explain why giving multiple unequal awards is so common on crowdsourcing platforms. We aim to fill these gaps between theory and practice, generating insights that seem consistent with both practice and empirical evidence.

Methodology: We use a game-theoretic model in which the organizer decides on the contest duration and the award scheme, while each agent decides whether to participate and how to exert effort over the contest duration, accounting for potential changes in her productivity over time. The quality of an agent's solution improves with her effort but is also subject to output uncertainty.

Results: We show that the optimal contest duration increases as the relative impact of the uncertainty on an agent's output increases, and it decreases when agent productivity increases over time. We characterize an optimal award scheme and show that giving multiple (almost always) unequal awards is optimal when the organizer's urgency in obtaining solutions is below a certain threshold. We also show that this threshold is larger when agent productivity increases over time. Finally, consistent with empirical findings, we show that there is a positive correlation between the optimal contest duration and the optimal total award.

Managerial implications: Our results suggest that the optimal contest duration increases with the novelty or sophistication of the solutions the organizer seeks, and it decreases when the organizer can offer support tools that increase agent productivity over time. These insights and their drivers seem consistent with practice. Our findings also suggest that giving multiple unequal awards is advisable for an organizer with low urgency in obtaining solutions. Finally, giving multiple awards goes hand in hand with offering support tools that increase agent productivity over time. These results help explain why many contests on crowdsourcing platforms give multiple unequal awards.
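To make the Methodology concrete, here is a minimal illustrative formalization; the functional forms below are simplifying assumptions of ours, not the paper's exact specification. Agent $i$'s solution quality over a contest of duration $T$ can be written as
\[
q_i(T) \;=\; \int_0^T p(t)\, e_i(t)\, \mathrm{d}t \;+\; \epsilon_i ,
\]
where $e_i(t)$ is the agent's effort rate, $p(t)$ is her (possibly time-varying) productivity, and $\epsilon_i$ captures output uncertainty. Under such a formulation, the organizer chooses $T$ and the award scheme to trade off solution quality against a delay cost reflecting its urgency in obtaining solutions.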