Persistent inequalities and injustices are a blight on modern liberal societies. Examples abound, from the gender pay gap to sentencing disparities between Black, Hispanic, and White defendants to disparities in the allocation of medical resources between Black and White patients. One cause of these and other inequalities is implicit social bias. In a process thought to operate outside conscious control, human cognition forms associations between social groups and attributions such as "nurturing," "lazy," or "uneducated." Such associations can result in unequal treatment. In theory, one way to circumvent implicit (and, of course, explicit) biases is to delegate important decisions, such as those on allocating benefits, resources, or opportunities, to algorithms, which are assumed to be free of human biases. However, evidence shows that algorithms can perpetuate and even amplify existing inequalities and injustices. We discuss how a philosophical thought experiment, Rawls's "veil of ignorance," and a psychological phenomenon, "deliberate ignorance" (the choice not to know), can shield people, institutions, and algorithms from biases. We explore the advantages of blinding human and artificial decision makers to information that could give rise to biases, and the ways of doing so, and we consider the practical challenges of successfully blinding algorithms.