Traditionally, the assumption has been that academic misconduct emerges primarily in response to "publish or perish" pressures. Robert Slutsky, a UC San Diego cardiologist famously caught in 1986 reporting imaginary experiments, was at one point putting out one article every ten days (Lock and Wells, 2001). "Publish or perish," however, is no longer the sole incentive for misconduct. New practices are emerging that are not limited to the production of fraudulent publications but are aimed instead at enhancing, often in unethical or fraudulent ways, the evaluation of their importance or "impact" (Biagioli, 2016). "Publish or perish" is merging with "impact or perish."

This is related to but distinct from the predictable gaming of academic performance indicators one would expect from Goodhart's law: as soon as an indicator becomes a target, gaming ensues, which forecloses its ability to function as a good indicator. Such gaming may take the form, for instance, of massaging the definition of what counts as a "successful student" in metrics of schools' performance, or of what counts as a "peer-reviewed" paper in faculty evaluation protocols. It could also involve aligning one's practices with metrics-relevant parameters, such as capping class enrollment at nineteen students so that classes fit the US News and World Report's definition of a "small class," which is rewarded in its ranking of universities.

But we now find authors and editors who move beyond this kind of gaming to create (rather than tweak) metric-enhancing evidence, such as citations to one's own work or to the work published in a given journal so as to boost its impact factor. We argue that the growing reliance on institutional metrics of evaluation does not just provide incentives for these kinds of manipulations but also creates their conditions of possibility. They would not have come into being were it not for the new metrics-based "audit culture" of academia (