2014
DOI: 10.1136/bmjopen-2013-004007

Machine-learning prediction of cancer survival: a retrospective study using electronic administrative records and a cancer registry

Abstract: Objectives: Using the prediction of cancer outcome as a model, we have tested the hypothesis that through analysing routinely collected digital data contained in an electronic administrative record (EAR), using machine-learning techniques, we could enhance conventional methods in predicting clinical outcomes. Setting: A regional cancer centre in Australia. Participants: Disease-specific data from a purpose-built cancer registry (Evaluation of Cancer Outcomes (ECO)) from 869 patients were used to predict survival at 6,…
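The study design in the abstract (a machine-learned model trained on registry/EAR data to predict survival at a fixed horizon) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the file name, the feature columns, the binary 6-month survival label, and the random-forest learner are placeholders, not the authors' published pipeline.

```python
# Minimal sketch: predicting 6-month survival from registry/EAR-style tabular data.
# Assumptions (illustrative only): a CSV export with example feature columns and a
# binary label 'survived_6m'; a random forest stands in for whatever learner was used.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("eco_registry_extract.csv")            # hypothetical export
features = ["age", "tumour_stage", "admissions_12m",     # illustrative predictors
            "emergency_presentations", "comorbidity_count"]
X, y = df[features], df["survived_6m"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

# Discrimination on held-out patients (area under the ROC curve).
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUC: {auc:.3f}")
```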


Cited by 92 publications (61 citation statements)
References 28 publications
“…These results correspond with previous studies using EHR data to develop risk models, illustrating that EHR-based models perform better with nearer-term events. [14][15][16][17][18][19][20][21] Moreover, when comparing the "important" variables over different time horizons, previous work has similarly suggested that more "dynamic" metrics are important for nearer-term outcomes and more "stable" metrics are important for longer-term events. 17 This finding stresses the importance of machine-learning methods capable of handling large numbers of disparate predictor variables.…”
Section: Discussion
confidence: 99%
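As a hedged illustration of how "important" variables can be compared across time horizons, the sketch below fits the same learner to a near-term and a longer-term label and ranks feature importances; the variables and labels are synthetic stand-ins, not data from the cited studies.

```python
# Illustrative only: comparing feature importances for near- vs longer-term
# outcomes using synthetic data (not the cited studies' data).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "recent_lab_change": rng.normal(size=n),   # a "dynamic" metric
    "age": rng.normal(size=n),                 # a "stable" metric
    "baseline_stage": rng.integers(1, 5, n),   # a "stable" metric
})
# Synthetic labels: the near-term outcome is driven by the dynamic metric,
# the longer-term outcome by the stable one.
y_near = (X["recent_lab_change"] + 0.2 * X["age"] > 0.5).astype(int)
y_far = (0.2 * X["recent_lab_change"] + X["age"] > 0.5).astype(int)

for name, y in [("near-term", y_near), ("longer-term", y_far)]:
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    ranked = sorted(zip(X.columns, model.feature_importances_),
                    key=lambda kv: -kv[1])
    print(name, ranked)
```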
“…This trend of AI models outperforming physician estimates continues: Gupta et al. took a pre-existing cancer registry and electronic administrative records (not medical records) and found that this approach provided better outcome prediction than previously used models or a clinician panel.…”
Section: And Medicine
confidence: 99%
“…The average performance (the ability of a model to separate patients with different outcomes) was calculated over the five training and testing repetitions per multiple imputed dataset for all three models and prediction periods and subsequently pooled. Performance was assessed using receiver operating characteristic (ROC) curves [4,11,27]. ROC curves are made by plotting the rate of false positives (1 − specificity) on the x-axis and the rate of true positives (sensitivity) on the y-axis for all threshold values.…”
Section: Discussion
confidence: 99%
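The evaluation described in that statement (ROC curves, with performance averaged over repeated training/testing splits) can be sketched as follows. This is a minimal sketch under stated assumptions: the synthetic dataset, the random-forest model, and the five-repetition split scheme are placeholders, not the cited study's actual setup.

```python
# Sketch of the evaluation described above: ROC curves plot the false-positive
# rate (1 - specificity) against the true-positive rate (sensitivity) across all
# thresholds, and performance is averaged over repeated train/test splits.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic stand-in for the patient data (869 samples, to echo the cohort size).
X, y = make_classification(n_samples=869, n_features=20, random_state=0)

aucs = []
splitter = StratifiedShuffleSplit(n_splits=5, test_size=0.3, random_state=0)
for train_idx, test_idx in splitter.split(X, y):            # five repetitions
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    scores = model.predict_proba(X[test_idx])[:, 1]
    fpr, tpr, thresholds = roc_curve(y[test_idx], scores)   # ROC coordinates
    aucs.append(roc_auc_score(y[test_idx], scores))

print(f"Mean AUC over repetitions: {np.mean(aucs):.3f}")
```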