“…Recently, Yang et al. [136] made an effort to reduce these biases by removing 2,702 of the 2,800 synsets in the person subtree used in ImageNet. Motivated by the importance of building socially responsible models, we follow recent work [51] to systematically study the fairness, harms, and biases of our models trained with self-supervised learning on random groups of internet images.…”