Gender and racial bias have also been identified within widely deployed NLP systems, for tasks including toxicity detection (Sap et al., 2019), sentiment analysis (Kiritchenko and Mohammad, 2018), coreference resolution (Rudinger et al., 2018), and language identification (Blodgett and O'Connor, 2017), among many other areas (Sun et al., 2019). Given the biases captured, reproduced, and perpetuated by NLP systems, there is growing interest in mitigating subjective biases (Sun et al., 2019), with approaches including modifying embedding spaces (Bolukbasi et al., 2016; Manzini et al., 2019), augmenting datasets (Zhao et al., 2018), and adapting natural language generation methods to "neutralize" text (Pryzant et al., 2019).
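As a rough illustration of the embedding-space approach, the "hard debiasing" idea of Bolukbasi et al. (2016) estimates a bias direction from definitional word pairs (e.g., "he"/"she") and projects it out of other word vectors. The sketch below uses toy three-dimensional vectors and a single word pair purely for illustration; the published method estimates the direction via PCA over many pairs and additionally equalizes pair sets.

```python
# Minimal sketch of hard-debiasing-style projection (toy vectors,
# one definitional pair; not the full published algorithm).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    n = dot(v, v) ** 0.5
    return [a / n for a in v]

# Hypothetical toy embeddings for illustration only.
emb = {
    "he":     [1.0, 0.2, 0.1],
    "she":    [-1.0, 0.2, 0.1],
    "doctor": [0.4, 0.5, 0.3],
}

def bias_direction(emb):
    # Single-pair estimate of the gender subspace direction.
    diff = [a - b for a, b in zip(emb["he"], emb["she"])]
    return normalize(diff)

def neutralize(v, direction):
    # Remove the component of v that lies along the bias direction.
    proj = dot(v, direction)
    return [a - proj * d for a, d in zip(v, direction)]

d = bias_direction(emb)
neutral_doctor = neutralize(emb["doctor"], d)
# The neutralized vector has (numerically) zero projection on the bias axis.
print(abs(dot(neutral_doctor, d)))
```

The design choice here is deliberate: only attribute-neutral words are projected, since fully removing the direction from definitional words (e.g., "he") would destroy meaningful gender information.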