If bias is observed in the AI model as described above, the researcher should return to the model-development stage and apply in-processing or post-processing bias-mitigation strategies (Berk et al., 2017; Gorrostieta et al., 2019; Kamishima et al., 2011; Woodworth et al., 2017; Zafar et al., 2017a, 2017b). For example, researchers could transform the data, inject or remove noise (Calmon et al., 2017; Zhang et al., 2018), relabel the data to ensure an equal proportion of positive predictions for the sensitive group and its counterparts (Hardt et al., 2016; Luong et al., 2011), reweigh labels before training (Feldman et al., 2015; Kamiran & Calders, 2012; Luong et al., 2011), control target labels via a latent output (Kehrenberg et al., 2020), apply fairness regularization that penalizes the mutual information between the sensitive feature and the classifier's predictions (Kamishima et al., 2012), or add constraints to the loss function that require satisfying a proxy for equalized odds or disparate impact (Woodworth et al., 2017; Zafar et al., 2017a, 2017b). See Figure 2 for a general framework for applying bias-mitigation techniques.
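To make the reweighing option concrete, the sketch below illustrates the core idea behind Kamiran and Calders (2012): each training instance with sensitive value s and label y is weighted by P(S=s)P(Y=y) / P(S=s, Y=y), so that the sensitive attribute and the label become statistically independent under the weighted distribution. This is a minimal illustration rather than the authors' reference implementation; the function name and the toy arrays are hypothetical.

```python
import numpy as np

def reweighing_weights(sensitive, labels):
    """Compute instance weights in the spirit of Kamiran & Calders (2012).

    An instance with sensitive value s and label y gets weight
    P(S=s) * P(Y=y) / P(S=s, Y=y), up-weighting group/label
    combinations that are under-represented in the training data.
    """
    sensitive = np.asarray(sensitive)
    labels = np.asarray(labels)
    weights = np.ones(len(labels), dtype=float)
    for s in np.unique(sensitive):
        for y in np.unique(labels):
            mask = (sensitive == s) & (labels == y)
            joint = mask.mean()  # observed P(S=s, Y=y)
            if joint > 0:
                # expected joint probability if S and Y were independent
                expected = (sensitive == s).mean() * (labels == y).mean()
                weights[mask] = expected / joint
    return weights

# Toy example: group 1 rarely receives a positive label, so its
# positive instances are up-weighted relative to group 0's.
s = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y = np.array([1, 1, 1, 0, 1, 0, 0, 0])
print(reweighing_weights(s, y))
```

In practice the returned weights would be passed to the learner before training, for example via the `sample_weight` argument that many scikit-learn estimators accept, which is what distinguishes this pre-processing approach from the in-processing regularizers and loss constraints cited above.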