Learner-centered pedagogy emphasizes active learning and formative feedback. Instructors often incentivize learners to engage in such formative assessment activities by counting their completion and scores toward the final grade, a practice that is equally relevant to MOOCs. However, previous studies have shown that many MOOC learners exploit anonymity to abuse formative feedback, a critical element of the learning process, in order to earn points without effort. Unfortunately, limiting feedback and access to reduce cheating is pedagogically counterproductive and diminishes the openness of MOOCs. We aimed to identify and analyze a MOOC assessment strategy that balances the tension among learner-centered pedagogy, incentive design, and the reliability of assessment. In this study, we evaluated an assessment model that MITx Biology introduced in a MOOC to reduce cheating, examining its effect on two aspects of learner behavior: the amount of cheating and learners’ engagement in formative course activities. The contribution of this paper is twofold. First, it provides MOOC designers with an analytically verified assessment model that reduces cheating without compromising learner engagement in formative assessments. Second, it provides a learning analytics methodology for estimating the effect of such an intervention.