Proceedings of the 2021 International Conference on Multimodal Interaction
DOI: 10.1145/3462244.3479897
Bias and Fairness in Multimodal Machine Learning: A Case Study of Automated Video Interviews

Abstract: We introduce the psychometric concepts of bias and fairness in a multimodal machine learning context assessing individuals' hireability from prerecorded video interviews. We collected interviews from 733 participants and hireability ratings from a panel of trained annotators in a simulated hiring study, and then trained interpretable machine learning models on verbal, paraverbal, and visual features extracted from the videos to investigate unimodal versus multimodal bias and fairness. Our results demonstrate t…

Cited by 30 publications (29 citation statements). References 34 publications.
“…For example, subgroup norming of features in ML models (i.e., z-scoring ML features separately for each subgroup) is common in computer-science applications. Past research using gender norming of ML features in the context of automated interviews did not find that it substantially reduced MLMB (Booth et al., 2021). More importantly, when ML models are used in the context of selection, one should recognize that subgroup norming (on race, sex, etc.)…”
Section: Discussion (mentioning)
confidence: 96%
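
For concreteness, here is a minimal Python sketch of the subgroup-norming idea quoted above, i.e., z-scoring ML features separately within each subgroup. It assumes pandas is available; the column names (gender, speech_rate, smile_intensity) are hypothetical placeholders, not features from the cited studies.

import pandas as pd

def subgroup_norm(df: pd.DataFrame, group_col: str, feature_cols: list) -> pd.DataFrame:
    # Z-score each feature using the mean and std of its own subgroup,
    # rather than the pooled statistics of the full sample.
    out = df.copy()
    out[feature_cols] = out.groupby(group_col)[feature_cols].transform(
        lambda col: (col - col.mean()) / col.std()
    )
    return out

# Hypothetical usage: interview features normed within each gender subgroup.
interviews = pd.DataFrame({
    "gender": ["m", "m", "f", "f"],
    "speech_rate": [1.0, 2.0, 3.0, 5.0],
    "smile_intensity": [0.2, 0.4, 0.1, 0.5],
})
print(subgroup_norm(interviews, "gender", ["speech_rate", "smile_intensity"]))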
“…In addition, it is also possible for researchers to apply different types of transformations to features (e.g., log transformation, normalization) that are subgroup-specific, or the same type of transformation but with a different mathematical function between subgroups. For example, when researchers normalize features for men and women separately before model training, if the training data for men and women have different means and variances on the features, the transformations are mathematically different between gender subgroups (Booth et al., 2021). Different subgroup transformations are regarded as a type of algorithm-training bias within the MLMB framework.…”
Section: Data Bias Source 1: Ground Truth in Training Data (mentioning)
confidence: 99%
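
A small numeric sketch of the point quoted above: when subgroups differ in feature means and variances, per-subgroup normalization defines mathematically different transformation functions, so the same raw feature value maps to different normed values. The distributions and numbers below are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical feature with different distributions per subgroup.
men = rng.normal(loc=10.0, scale=2.0, size=1_000)
women = rng.normal(loc=12.0, scale=3.0, size=1_000)

# Each subgroup's z-score is a different affine map: z_g(x) = (x - mu_g) / sigma_g.
def z_men(x):
    return (x - men.mean()) / men.std()

def z_women(x):
    return (x - women.mean()) / women.std()

x = 11.0  # the same raw feature value...
print(z_men(x), z_women(x))  # ...normalizes to different values in each subgroup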
“…By mechanizing human characteristics, these systems can obfuscate significant uncertainty and result in harmful biases. AI-based hiring systems that claim to glean information about candidates from audio and video have been shown to increase bias in outcome decisions and may present untenable trade-offs between bias mitigation and prediction accuracy [178]. AI systems marketed as making predictions based on facial expressions often generate decisions based on biased experimental design premises [172] or spurious patterns learned by the system (e.g., shortcut learning).…”
Section: Spurious Correlations (mentioning)
confidence: 99%
“…For example, the visual question answering (VQA) task [6] combines computer vision (CV) and natural language processing (NLP), and the model can answer relevant questions based on medical images and clinical notes [60]. However, multi-modal models face more serious bias and fairness issues than unimodal models, despite improvements in performance [12]. Only a few works have focused on fairness issues in multi-modality in healthcare systems [17].…”
Section: Fairness of Multi-modality Model for Healthcare (mentioning)
confidence: 99%