Computer scientists, and artificial intelligence researchers in particular, have a predisposition for adopting precise, fixed definitions to serve as classifiers (Agre, 1997; Broussard, 2018). But classification is an enactment of power; it orders human interaction in ways that produce advantage or suffering (Bowker & Star, 1999). In doing so, it obscures the messiness of human life, masking the work of the people involved in training machine learning systems and hiding the uneven distribution of its impacts on communities (Taylor, 2018; Gray, 2019; Roberts, 2019). Feminist scholars, and particularly feminist scholars of color, have made powerful critiques of the ways in which artificial intelligence systems formalize, classify, and amplify historical forms of discrimination, reifying existing forms of social inequality (Eubanks, 2017; Benjamin, 2019; Noble, 2018). In response, the machine learning community has begun to address claims of algorithmic bias under the rubric of fairness, accountability, and transparency. It has largely done so in familiar terms, however, using statistical methods aimed at achieving parity and deploying fairness ‘toolkits’. Yet actually existing inequality is reflected and amplified in algorithmic systems in ways that exceed the capacity of statistical methods alone. This article develops a feminist critique of extant methods of dealing with algorithmic discrimination. I outline the ways in which gender discrimination and erasure are built into the field of AI at a foundational level: the product of a community that largely represents a small, privileged, and male segment of the global population (Author, 2019). In so doing, I illustrate how a situated mode of inquiry enables us to examine more closely the feedback loop between discriminatory workplaces and discriminatory systems.