Crowdsourcing systems such as Yahoo! Answers, Amazon Mechanical Turk, and Google Helpouts have become increasingly prevalent in recent years. User participation, high-quality solutions, and a fair rating system are critical to the revenue of a crowdsourcing system. In this paper, we design a class of simple but effective incentive mechanisms to attract users to participate and to provide high-quality solutions. Our incentive mechanism consists of a task bundling scheme and a rating system, and it pays workers according to the solution ratings given by requesters. We also propose a probabilistic model to capture human factors such as rating biases, and we quantify their impact on the incentive mechanism, which is shown to be highly robust. We further develop a model to characterize the design space of a class of commonly used rating systems, namely threshold-based rating systems, and we quantify the impact of such rating systems and the bundling scheme on the incentive mechanism.
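As a concrete illustration of the kind of mechanism described above, the sketch below shows a minimal threshold-based rating system in which a worker's rating moves up or down with each rated solution and payment is released only when the rating meets a threshold. The update rule, threshold value, and payment amounts here are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of a threshold-based rating system (assumed rules,
# not the paper's exact mechanism).

def update_rating(rating, approved, r_max=5):
    """Raise the worker's rating on an approved solution, lower it otherwise."""
    if approved:
        return min(rating + 1, r_max)
    return max(rating - 1, 0)

def payment(rating, threshold=3, full_pay=1.0):
    """Pay the full amount only when the rating meets the threshold."""
    return full_pay if rating >= threshold else 0.0

# Example: a worker starting at rating 3 drops below the threshold
# after two low-rated solutions and receives no payment.
r = 3
for approved in [False, False]:
    r = update_rating(r, approved)
assert payment(r) == 0.0
```

The threshold creates the incentive: a worker who repeatedly submits low-quality solutions falls below it and forfeits payment, while consistently high-quality work keeps the rating (and pay) high.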