Proceedings of the 3rd Clinical Natural Language Processing Workshop 2020
DOI: 10.18653/v1/2020.clinicalnlp-1.33

Exploring Text Specific and Blackbox Fairness Algorithms in Multimodal Clinical NLP

Abstract: Clinical machine learning is increasingly multimodal, with data collected in both structured tabular formats and unstructured forms such as free text. We propose a novel task of exploring fairness on a multimodal clinical dataset, adopting equalized odds for the downstream medical prediction tasks. To this end, we investigate a modality-agnostic fairness algorithm (equalized odds post-processing) and compare it to a text-specific fairness algorithm: debiased clinical word embeddings. Despite the fact that debiased word embeddings …
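For context, equalized odds (the criterion the abstract adopts) requires the model's prediction to be conditionally independent of the protected attribute given the true label; concretely, true-positive and false-positive rates must match across groups. A minimal statement for the binary case, following Hardt et al. (2016), from which equalized odds post-processing originates:

```latex
% Equalized odds for binary prediction \hat{Y}, protected attribute A, label Y:
% the prediction carries no group information beyond what the label explains.
\Pr\!\left(\hat{Y}=1 \mid A=a,\ Y=y\right)
  = \Pr\!\left(\hat{Y}=1 \mid A=a',\ Y=y\right)
  \quad \forall\, a, a' \in \mathcal{A},\ y \in \{0,1\}
```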

Cited by 14 publications (11 citation statements); references 27 publications.
“…There are three main strategies for encouraging group fairness [20]: pre-processing data to find less-biased representations [81]; enforcing fairness while training a model, typically through regularization [72,118]; and altering a model's predictions to satisfy fairness constraints after it is trained [2,24,48,83]. In this paper, we utilize the in-processing method proposed by Zhang et al. [118] for training fair blackbox models.…”
Section: Algorithmic Fairness
confidence: 99%
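A minimal sketch may make the in-processing strategy concrete. The PyTorch code below follows the spirit of Zhang et al. [118] (adversarial debiasing): an adversary tries to recover the protected attribute from the model's prediction, and the predictor is penalized when the adversary succeeds. Layer sizes, the trade-off weight `alpha`, and the `train_step` helper are hypothetical, and Zhang et al.'s gradient-projection refinement is omitted.

```python
# Simplified adversarial in-processing sketch (after Zhang et al. [118]).
# Shapes, sizes, and the single-batch training step are illustrative.
import torch
import torch.nn as nn

predictor = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))

opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
alpha = 1.0  # fairness/accuracy trade-off weight (hypothetical value)

def train_step(x, y, a):
    """One alternating update. x: (B, 32) features; y, a: (B, 1) floats in {0, 1}."""
    logits = predictor(x)

    # 1) Adversary tries to recover the protected attribute from the
    #    prediction and the true label (the equalized-odds variant).
    adv_loss = bce(adversary(torch.cat([logits.detach(), y], dim=1)), a)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Predictor minimizes task loss while *hurting* the adversary.
    #    (Zhang et al. additionally project out the gradient component that
    #    helps the adversary; that refinement is omitted here.)
    fool_loss = bce(adversary(torch.cat([logits, y], dim=1)), a)
    pred_loss = bce(logits, y) - alpha * fool_loss
    opt_pred.zero_grad()
    pred_loss.backward()
    opt_pred.step()
    return pred_loss.item(), adv_loss.item()
```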
“…This not only entails providing multimodal explanations underlying an AI decision and training systems through demonstrations, but also leveraging multimodal data sources (text, image, video, audio) that encompass diverse ethical perspectives in defining, designing, and developing ethical AI systems. Existing works address only parts of the proposed strategy, such as multimodal explanations [41,63], exploration of fairness-related issues in multimodal settings [7,69], inference of norms from stories [15], or the use of imitation-learning principles for value alignment [61]; these works neither leverage multimodal data sources nor incorporate diverse ethical perspectives in their design. For example, oral traditions of teaching in Indian art education are heavily informed by "Dhvani", or sound signals.…”
Section: Integrating Multimodal Data for Characterizing Ethics
confidence: 99%
“…Considering clinical notes 49,50, temporal measurements 4,51,52, or both 53 from MIMIC-III, fairness evaluation and bias mitigation have recently been studied for tasks such as mortality prediction 4,49–53, phenotyping 50,53, readmission 51, and length of stay 52. To evaluate data and prediction fairness for these healthcare tasks, attributes such as ethnicity 4,49,50,52,53, gender 50,52,53, insurance 50,53, age 49, and language 50 are considered most often to split patients into different protected groups.…”
Section: Bias and Fairness in MIMIC-III
confidence: 99%
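The group-splitting protocol described above is straightforward to sketch: partition patients by a protected attribute and compare error rates across the resulting groups. The code below is illustrative only; the DataFrame and the column names (`ethnicity`, `mortality`, `y_hat`) are hypothetical placeholders, not actual MIMIC-III fields.

```python
# Illustrative per-group fairness audit: split patients by a protected
# attribute and compare true/false-positive rates across the groups.
import pandas as pd

def equalized_odds_gaps(df, attr="ethnicity", label="mortality", pred="y_hat"):
    """Per-group TPR/FPR plus the largest pairwise gap for each rate."""
    rates = {}
    for group, g in df.groupby(attr):
        pos, neg = (g[label] == 1), (g[label] == 0)
        tpr = ((g[pred] == 1) & pos).sum() / max(pos.sum(), 1)
        fpr = ((g[pred] == 1) & neg).sum() / max(neg.sum(), 1)
        rates[group] = {"TPR": tpr, "FPR": fpr}
    table = pd.DataFrame(rates).T             # one row per protected group
    return table, table.max() - table.min()   # gap: max minus min per rate
```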
“…When making medical decisions based on text data such as clinical notes, word embeddings used as machine learning inputs have been demonstrated to propagate unwanted relationships with regard to different genders, language speakers, ethnicities, and insurance groups 50,53. With respect to gender and insurance type, differences in accuracy, and therefore machine bias, have been observed for mortality prediction 51.…”
Section: Bias and Fairness in MIMIC-III
confidence: 99%
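One concrete way to probe the "unwanted relationships" described above is a WEAT-style association score in the spirit of Caliskan et al. (2017): compare a clinical term's mean cosine similarity to two groups of attribute words. The word lists and the `emb` lookup (word to vector) below are hypothetical.

```python
# WEAT-style association probe (in the spirit of Caliskan et al., 2017).
# `emb` is a hypothetical word -> numpy-vector lookup (e.g., clinical word2vec).
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, group_a, group_b, emb):
    """Mean similarity to group A minus group B; values far from zero
    suggest the embedding links the word to one group more than the other."""
    a = np.mean([cosine(emb[word], emb[w]) for w in group_a])
    b = np.mean([cosine(emb[word], emb[w]) for w in group_b])
    return a - b

# Example (hypothetical word lists):
# association("noncompliant", ["he", "him", "his"], ["she", "her", "hers"], emb)
```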