As of 2016, one in two American adults could be found in at least one American law enforcement face recognition network (Garvie, Bedoya & Frankle, 2016). Racial bias in facial recognition technology (FRT) is an important site of study because technology is largely conceived by public and state actors as neutral and democratic in nature, exempt from the biases and prejudices of human life (Noble, 2018). This study traces the ways in which Amazon’s responses to claims of racial bias in Rekognition, as well as its general descriptions of FRT, allow race and existing relations of power to manifest and persist. Employing a critical discourse analysis, I argue that Amazon works to obscure racial bias in both the development and the application of FRTs in law enforcement. Amazon also enables what I refer to as discourses of racial neutrality, which allow it to dismiss any racially biased FRT outcome as a "glitch in the system" that has nothing to do with race, despite decades of evidence to the contrary.