In undergraduate STEM education, mathematics-focused courses pose a major obstacle to degree completion for many students not majoring in the subject. This can be addressed by providing regular, individual feedback on student work, yet many institutions, especially smaller ones, lack the financial resources to do so. One of the most promising approaches to this problem is peer assessment (PA), which has indeed been shown to be more effective in facilitating student learning than assessment by teaching staff. In this quasi-experiment we explore the practical scenario of employing PA as a grading tool in the context of an exam prerequisite in an undergraduate mathematics class. Not only does the exam prerequisite incentivize students to participate, but the use of PA in this context also alleviates ethical concerns that accompany its use as a grading tool. Both groups were composed of first-semester computer science students ($n_{0} = 24$, $n_{\textrm{PA}} = 26$). For the PA group the exam prerequisite consisted of weekly assignments over the course of the semester, whereas students in the control group had to pass a single written test to be admitted to the exam two weeks later. We observed an increase of $23.1\%$ (or $8.7$ percentage points) in the exam passing rate of the PA group, though the sample size was plausibly too small to reach statistical significance ($p = .370$). We also found a positive correlation between scores in the PA process and exam performance ($r = .55$, $p = .007$). Student opinions on the intervention were generally favorable. A future true experiment should also measure the time spent on managing the PA process. Advice for the use of PA as an exam prerequisite is also provided here.