Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
DOI: 10.1145/3442188.3445865

Fairness Violations and Mitigation under Covariate Shift

Cited by 57 publications (60 citation statements)
References 27 publications
“…Recent work has also proposed methodological advances to ensure transferability of models across settings, for example, by pre-training on large datasets from related machine learning tasks [62,63] and with the help of causal knowledge about shifts [64-66]. Better metrics for dataset shift can help practitioners decide whether to transfer a model to a new setting based on how large the shift is between hospitals, for example. Importantly, findings here showed that the race variable often mediated shifts in clinical variables.…”
Section: Discussion
confidence: 99%
“…We hope that current work motivates development and evaluation of such metrics on larger and more diverse populations and datasets. Recent work has also proposed methodological advances to ensure transferability of models across settings, for example, by pre-training on large datasets from related machine learning tasks (63,64) and with the help of causal knowledge about shifts (65-67). Better metrics for dataset shift can help practitioners decide whether to transfer a model to a new setting based on how large the shift is between hospitals, for example.…”
Section: Discussion
confidence: 99%
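The dataset-shift metrics the quotes above allude to can be illustrated with a minimal sketch. This is not from the cited paper: the `population_stability_index` function and the synthetic "hospital" samples are assumptions for illustration; the population stability index is just one simple way to score how far a feature's distribution has drifted between a source and a target site.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Population stability index between two samples of one feature.
    Larger values indicate a larger distribution shift."""
    # Bin edges come from the reference ("expected") sample
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions; a small epsilon avoids log(0)
    eps = 1e-6
    e_prop = e_counts / e_counts.sum() + eps
    a_prop = a_counts / a_counts.sum() + eps
    return float(np.sum((a_prop - e_prop) * np.log(a_prop / e_prop)))

rng = np.random.default_rng(0)
hospital_a = rng.normal(0.0, 1.0, 5000)  # source site (hypothetical)
hospital_b = rng.normal(0.5, 1.2, 5000)  # shifted target site (hypothetical)
psi = population_stability_index(hospital_a, hospital_b)
```

A practitioner could compute such a score per feature before deciding whether a model trained at one hospital is safe to transfer to another; a near-zero value means the binned distributions are nearly identical.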
“…As mentioned in the previous section, the context of fairness is domain dependent, and there may be inherent trade-offs in aggregating multiple metrics in different use cases [Kleinberg et al., 2016; Singh et al., 2021].…”
Section: Desired Characteristics of Fairness Metrics
confidence: 99%
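The trade-off between fairness metrics mentioned above can be made concrete with a toy sketch. Nothing here is from the cited papers: the synthetic labels, predictions, and group assignments are assumptions chosen so that one classifier satisfies demographic parity exactly while violating equal opportunity, showing why aggregating metrics is not straightforward.

```python
import numpy as np

# Synthetic data: y_true = ground truth, y_pred = classifier output,
# group = binary sensitive attribute (all hypothetical values)
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

def demographic_parity_diff(y_pred, group):
    # Gap in positive-prediction rates between the two groups
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    # Gap in true-positive rates between the two groups
    def tpr(g):
        return y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

dp_gap = demographic_parity_diff(y_pred, group)          # 0.0: parity holds
eo_gap = equal_opportunity_diff(y_true, y_pred, group)   # 0.5: opportunity gap
```

Both groups receive positive predictions at the same rate (demographic parity gap of 0), yet true positives are caught far more often in one group (equal-opportunity gap of 0.5), so a single aggregate score would hide the violation.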
“…In on-device ML settings, trained ML models undergo multiple post-processing steps to overcome resource constraints for on-device deployment and distribution shifts due to context heterogeneity. Some of these post-processing steps, like domain adaptation [39] and model compression [21], can be biased. Rather than looking at the compound effect of multiple algorithmic decisions, we consider the propagation of bias through the different processing stages in the on-device ML development pipeline.…”
Section: Related Work
confidence: 99%