Applications that use games to harness human intelligence for computational tasks are growing in popularity and may be termed human computation games (HCGs). Most HCGs are collaborative in nature, requiring players to cooperate within a game to score points. Competitive versions, in which players work against each other, are a more recent development and have been claimed to address shortcomings of collaborative HCGs, such as poor computation quality. To date, however, little work has examined how different HCG genres influence computational performance and players' perceptions of the games. In this paper we study these issues using image-tagging HCGs, in which users play games to generate keywords for images. Three versions were created: a collaborative HCG, a competitive HCG, and a control application for manual tagging. The applications were evaluated to assess the quality of the image tags generated as well as users' perceptions of the applications. Results suggest a tension between entertainment and tag quality: while participants reported preferring the collaborative and competitive image-tagging HCGs over the control application, those using the control application seemed to generate better-quality tags. Implications of the work are discussed.