2021
DOI: 10.1093/jamiaopen/ooab077
Quantifying representativeness in randomized clinical trials using machine learning fairness metrics

Abstract:
Objective: We help identify subpopulations underrepresented in randomized clinical trial (RCT) cohorts with respect to national, community-based, or health system target populations by formulating population representativeness of RCTs as a machine learning (ML) fairness problem, deriving new representation metrics, and deploying them in easy-to-understand interactive visualization tools.
Materials and Methods: We represent RCT…

Cited by 18 publications (22 citation statements). References 41 publications.

Citation statements:
“…Our previous work on RCT representativeness metrics [14], derived from machine learning fairness metrics, is used to evaluate enrollment representativeness. These metrics have a lower threshold τ_l and an upper threshold τ_u.…”
Section: Methods (mentioning)
Confidence: 99%
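
The quoted methods gate each representativeness metric between a lower threshold τ_l and an upper threshold τ_u. Below is a minimal Python sketch of one plausible such metric, assuming a disparate-impact-style ratio (observed subgroup share divided by target-population share); the function names, example counts, and the 0.8/1.25 defaults (the common "80% rule" from fairness auditing) are illustrative assumptions, not the paper's exact definitions.

    # A minimal sketch (not the paper's exact metrics): a disparate-impact-style
    # representativeness ratio per subgroup, flagged against thresholds tau_l, tau_u.
    from typing import Dict

    def representativeness_ratios(
        observed: Dict[str, int],   # subgroup -> count in the RCT cohort
        target: Dict[str, float],   # subgroup -> proportion in the target population
    ) -> Dict[str, float]:
        """Ratio of each subgroup's share in the cohort to its target share."""
        total = sum(observed.values())
        return {g: (observed[g] / total) / target[g] for g in target}

    def flag_underrepresented(ratios, tau_l=0.8, tau_u=1.25):
        """A subgroup is adequately represented if tau_l <= ratio <= tau_u."""
        return {g: not (tau_l <= r <= tau_u) for g, r in ratios.items()}

    # Hypothetical enrollment counts vs. target-population proportions:
    ratios = representativeness_ratios(
        observed={"female": 120, "male": 280},
        target={"female": 0.51, "male": 0.49},
    )
    print(ratios)                         # {'female': 0.588..., 'male': 1.428...}
    print(flag_underrepresented(ratios))  # both flagged: outside [0.8, 1.25]
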
“…To understand the proposed approach, we first introduce some terms from population representativeness. Representativeness metrics: Quantitative measures for disparities between a target population and an observed sample [14]. Ideally, subgroup sizes are equal to their proportions in the target population.…”
Section: A. Enrollment Planning and Monitoring (mentioning)
Confidence: 99%
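
As an illustration of the "ideal" condition in this quote (subgroup shares in the observed sample matching their target-population proportions), here is a minimal sketch of a difference-based disparity; the subgroups and numbers are hypothetical, and this is not the specific metric defined in [14].

    # A minimal sketch of the "ideal" condition in the quote: subgroup shares in
    # the observed sample should match their shares in the target population.
    def subgroup_disparities(sample_counts, target_props):
        """Signed difference between observed and target proportions per subgroup."""
        n = sum(sample_counts.values())
        return {g: sample_counts[g] / n - target_props[g] for g in target_props}

    disp = subgroup_disparities(
        sample_counts={"18-44": 50, "45-64": 110, "65+": 40},   # hypothetical cohort
        target_props={"18-44": 0.45, "45-64": 0.35, "65+": 0.20},
    )
    # Negative values indicate underrepresentation relative to the target population.
    print(disp)  # {'18-44': -0.2, '45-64': 0.2, '65+': 0.0}
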
“…For features represented as metadata (e.g., patient age, slide scanner, or diagnosis), bias can be detected by comparing the feature distributions in the test dataset and the target population using summary statistics (e.g., via mean and standard deviation) or dedicated fairness metrics [102,103]. Detection of bias in an entire test dataset requires a good estimate of the feature distribution of the target population of images.…”
Section: Bias Detection (mentioning)
Confidence: 99%
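
A minimal sketch, in Python with NumPy, of the summary-statistic comparison this quote describes, using the standardized mean difference (SMD) between a test dataset's metadata feature and the target population's; the age distributions and the |SMD| > 0.1 flag are illustrative assumptions, not taken from the cited work.

    # A minimal sketch of summary-statistic bias detection for a metadata feature
    # (e.g., patient age): compare the test dataset and the target population via
    # the standardized mean difference (SMD).
    import numpy as np

    def standardized_mean_difference(test: np.ndarray, target: np.ndarray) -> float:
        pooled_sd = np.sqrt((test.std(ddof=1) ** 2 + target.std(ddof=1) ** 2) / 2)
        return (test.mean() - target.mean()) / pooled_sd

    rng = np.random.default_rng(0)
    test_ages = rng.normal(62, 8, size=500)      # hypothetical test-set ages
    target_ages = rng.normal(55, 12, size=5000)  # hypothetical target-population ages

    smd = standardized_mean_difference(test_ages, target_ages)
    print(f"SMD = {smd:.2f}")  # |SMD| > 0.1 is a common flag for meaningful imbalance
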