The use of opaque machine learning (ML) algorithms is often justified by their accuracy. For example, IBM has advertised its algorithms as being able to predict when workers will quit with 95% accuracy, an EU research project on lie detection in border control has reported 75% accuracy, and researchers have claimed to deduce sexual orientation from face images with 91% accuracy. Such performance numbers are, on the one hand, used to make sense of the functioning of opaque algorithms and promise to quantify the quality of algorithmic predictions. On the other hand, they are performative and rhetorical, meant to convince others of the ability of algorithms to know the world and its future objectively, making calculated, partial visions appear certain. This duality marks a conflict of interest when the actors who conduct an evaluation also profit from positive outcomes. Building on work in the sociology of testing and agnotology, I discuss seven ways in which the construction of high-accuracy claims also involves the production of ignorance. I argue that this ignorance should be understood as productive and strategic, since it is imbued with epistemological authority, making uncertain matters seem certain in ways that benefit some groups over others. Several examples illustrate how tech companies increasingly produce ignorance strategically, reminiscent of tactics used by controversial industries with highly concentrated market power, such as big oil and tobacco. My analysis deconstructs claims of certainty by highlighting the politics and contingencies of the testing used to justify the adoption of algorithms. I further argue that current evaluation practices in ML are prone to producing problematic forms of ignorance, such as misinformation, and to reinforcing structural inequalities, because human judgment and power structures are invisibilized, narrow and oversimplified metrics are overused, and pernicious incentive structures encourage overstatements enabled by flexibility in testing. I provide recommendations on how to deal with and rethink incentive structures, testing practices, and the communication and study of accuracy, with the goal of opening up possibilities, making contingencies more visible, and enabling the imagination of different futures.