“…The use of algorithms to aid critical decision-making processes in government and industry has attracted commensurate scrutiny from academia, lawmakers, and social justice advocates in recent years [4,7,71], because ML systems trained on a snapshot of society have the unintended consequence of learning, propagating, and amplifying historical social biases and power dynamics [5,56]. The current research landscape consists of both ML explanation methods and fairness metrics that aim to uncover problems in trained models [8,30,45,59,68], and fairness-aware ML algorithms, for instance in classification [31,34,37,47], regression [2,9], causal inference [43,49], word embeddings [13,14], and ranking [16,64,72].…”
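
To make the notion of a fairness metric concrete, the following sketch (illustrative only, not drawn from any of the cited works) computes demographic parity difference, one widely used group-fairness measure: the gap in positive-prediction rates between two groups defined by a sensitive attribute. The function name and the 0/1 encodings are assumptions made for this example.

```python
# Illustrative sketch of one common fairness metric: demographic parity
# difference, the gap in positive-prediction rates across two groups.

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rate between two groups.

    y_pred    -- iterable of 0/1 model predictions
    sensitive -- iterable of 0/1 group memberships, same length
    """
    rates = {}
    for group in (0, 1):
        preds = [p for p, s in zip(y_pred, sensitive) if s == group]
        rates[group] = sum(preds) / len(preds)
    return abs(rates[0] - rates[1])

# Example: group 0 receives positive predictions 50% of the time,
# group 1 only 25%, so the demographic parity difference is 0.25.
gap = demographic_parity_difference(
    y_pred=[1, 0, 1, 0, 1, 0, 0, 0],
    sensitive=[0, 0, 0, 0, 1, 1, 1, 1],
)
print(gap)  # 0.25
```

A value of 0 would indicate parity; fairness-aware training algorithms such as those cited above typically constrain or penalize gaps of this kind during learning.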