Given the increasing prevalence of facial analysis technology, bias in these tools has become an ever greater source of concern. Causality has been proposed as a way to address this bias, giving rise to the popularity of counterfactuals as a bias mitigation tool. In this paper, we undertake a systematic investigation of the use of counterfactuals to achieve both statistical and causal fairness in facial expression recognition. We explore bias mitigation strategies with counterfactual data augmentation at the pre-processing, in-processing, and post-processing stages, as well as a stacked approach that combines all three. At the in-processing stage, we propose using Siamese Networks to suppress the differences between the predictions on the original and the counterfactual images. Our experimental results on RAF-DB with added counterfactuals show that: (1) the in-processing method outperforms the pre-processing and post-processing methods in terms of accuracy, F1 score, statistical fairness, and counterfactual fairness, and (2) stacking the pre-processing, in-processing, and post-processing stages provides the best performance.
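The in-processing approach can be pictured as a shared-weight (Siamese) backbone that scores an image and its counterfactual and is penalised when the two predictions diverge. The sketch below illustrates this idea in PyTorch; the ResNet-18 backbone, the symmetric-KL consistency term, and the `lambda_cf` weighting are illustrative assumptions, not the paper's exact implementation.

```python
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class SiameseFER(nn.Module):
    """Shared-weight backbone applied to an image and its counterfactual."""

    def __init__(self, num_classes=7):  # RAF-DB has 7 basic emotion classes
        super().__init__()
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, x_orig, x_cf):
        # Same weights are used for both branches (Siamese setup).
        return self.backbone(x_orig), self.backbone(x_cf)


def training_loss(logits_orig, logits_cf, labels, lambda_cf=1.0):
    # Standard classification loss on the original images ...
    ce = F.cross_entropy(logits_orig, labels)
    # ... plus a penalty that suppresses differences between predictions on
    # the original and counterfactual images (symmetric KL is an
    # illustrative choice of divergence).
    log_p = F.log_softmax(logits_orig, dim=1)
    log_q = F.log_softmax(logits_cf, dim=1)
    consistency = 0.5 * (F.kl_div(log_q, log_p.exp(), reduction="batchmean")
                         + F.kl_div(log_p, log_q.exp(), reduction="batchmean"))
    return ce + lambda_cf * consistency
```

During training, each batch would pair every original image with its counterfactual (e.g. a perceived-gender-swapped version) so that the consistency term directly targets counterfactual fairness while the cross-entropy term preserves recognition accuracy.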
Mental health is becoming an increasingly prominent health challenge. Despite a plethora of studies analysing and mitigating bias in tasks such as face recognition and credit scoring, research on machine learning (ML) fairness for mental health has been sparse to date. In this work, we focus on gender bias in mental health and make the following contributions. First, we examine whether bias exists in existing mental health datasets and algorithms, conducting experiments on the Depresjon, Psykose, and D-Vlog datasets; we find that both data and algorithmic bias exist. Second, we analyse strategies that can be deployed at the pre-processing, in-processing, and post-processing stages to mitigate bias, and we evaluate their effectiveness. Third, we investigate factors that affect the efficacy of existing bias mitigation strategies and outline recommendations for achieving greater gender fairness in mental health applications. Upon obtaining counter-intuitive results on the D-Vlog dataset, we undertake further experiments and analyses, and provide practical suggestions to avoid hampering bias mitigation efforts in ML for mental health.
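As a concrete example of the kind of group-fairness evaluation described above, the snippet below computes two common gaps between gender groups for a binary mental-health classifier. The specific metrics (demographic parity and equal opportunity differences) and the binary gender coding are illustrative assumptions, not the paper's exact evaluation protocol.

```python
import numpy as np


def gender_fairness_gaps(y_true, y_pred, gender):
    """Group fairness gaps between two gender groups (coded 0/1)."""
    y_true, y_pred, gender = map(np.asarray, (y_true, y_pred, gender))
    gaps = {}
    # Demographic parity: difference in positive-prediction rates.
    rates = [y_pred[gender == g].mean() for g in (0, 1)]
    gaps["demographic_parity_diff"] = abs(rates[0] - rates[1])
    # Equal opportunity: difference in true-positive rates.
    tprs = [y_pred[(gender == g) & (y_true == 1)].mean() for g in (0, 1)]
    gaps["equal_opportunity_diff"] = abs(tprs[0] - tprs[1])
    return gaps


# Example: hypothetical predictions from a depression detector, split by gender.
print(gender_fairness_gaps(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 0, 1, 1, 1],
    gender=[0, 0, 0, 1, 1, 1],
))
```

Tracking such gaps before and after applying pre-, in-, or post-processing mitigation makes it possible to compare the strategies' effectiveness on a common scale.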