2021
DOI: 10.1158/1557-3265.adi21-po-074
Abstract PO-074: The impact of phenotypic bias in the generalizability of deep learning models in non-small cell lung cancer

Abstract: Although deep learning analysis of diagnostic imaging has shown increasing effectiveness in modeling non-small cell lung cancer (NSCLC) outcomes, only a minority of proposed deep learning algorithms have been externally validated. Given that a majority of these models are built on single-institution datasets, their generalizability across the entire population remains understudied. Moreover, the effect of biases that exist among institutional training datasets on the overall generalizability of deep learning prognostic mod…

Cited by 2 publications (3 citation statements)
References 0 publications
“…For instance, Khor et al [ 139 ] used a data set with racial demographics of 53% non-Hispanic White, 22% Hispanic, and 13% Black or African American to develop a recurrence risk prediction model for adults with prostate cancer. Even with the explicit inclusion of race, they noted that the model had “worse performance in minority subgroups compared to NHW [non-Hispanic White].” Conversely, others argued that bias in training data sets of AI algorithms may not always result in decreased generalizability; for example, Gilson et al [ 140 ] suggested that biased gender representation in training data sets did not lead to decreased generalizability in an algorithm to predict survival in non–small cell lung cancer.…”
Section: Results (confidence: 99%)
“…Despite the pressing nature of these concerns, the paucity of studies on biased AI algorithms in our search was surprising. Many AI applications identified in our study were trained on selecting data sets from single institutions, creating a high risk of bias, which should be a pressing concern, given that algorithmic bias can exacerbate health inequities [ 140 , 186 ]. A prominent cause of bias is the lack of consideration of the different contexts in which an algorithm is developed and subsequently deployed.…”
Section: Discussion (confidence: 99%)