2022
DOI: 10.1016/j.cmpb.2021.106603
Using Machine Learning to Identify Intravenous Contrast Phases on Computed Tomography

Cited by 9 publications (10 citation statements); references 27 publications.
“…This performance is consistent with previous studies exploring 2D architectures for four-phase classification. 5,7,12 Similar to prior studies, 4,7 we found that only 15% of our dataset had informative phase descriptions in the DICOM header. This highlights the need for automated methods to annotate the phase of CT scans to aid in high-quality dataset curation, which is essential for deep learning algorithmic development.…”
Section: Discussion (supporting)
confidence: 83%
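The statement above notes that only a minority of scans carry an informative phase label in the DICOM header, motivating automated annotation. A minimal sketch of the keyword-matching baseline such curation typically starts from, applied to free-text series descriptions; the keyword lists and example descriptions are illustrative assumptions, not the authors' actual rules:

```python
# Map free-text DICOM SeriesDescription strings to a contrast phase by
# keyword matching. Descriptions with no recognized keyword return None,
# i.e., they would need model-based annotation instead.
PHASE_KEYWORDS = {
    "non-contrast": ["non-contrast", "noncon", "without contrast"],
    "arterial": ["arterial", "art phase"],
    "portal venous": ["portal", "venous"],
    "delayed": ["delayed", "delay"],
}

def annotate_phase(series_description):
    """Return a phase label if the description contains a known keyword."""
    text = series_description.lower()
    for phase, keywords in PHASE_KEYWORDS.items():
        if any(k in text for k in keywords):
            return phase
    return None  # uninformative header -> fall back to the classifier

descriptions = ["ABD ARTERIAL 2.5mm", "CHEST/ABD/PEL", "PORTAL VENOUS AX"]
labels = [annotate_phase(d) for d in descriptions]
```

In this toy run only two of three descriptions are resolvable, mirroring the sparsity of informative headers the statement reports.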
“…3,4 It was later expanded to predict four contrast phases: non-contrast, arterial, portal venous, and delayed. 5–8 This work extends the task to five-phase classification by adding the nephrographic phase. Our contributions are three-fold.…”
Section: Introduction (mentioning)
confidence: 99%
“…Our F1-score (0.9852) was better than those reported by Zhou et al 17 (0.977) and Dao et al 19 (0.9209). The achieved accuracy (0.9936) was also higher than that reported by Tang et al 18 (0.93) and Muhamedrahimov et al 20 (0.933).…”
Section: Fold (contrasting)
confidence: 51%
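The comparison above rests on macro-averaged F1 across phase classes. A minimal sketch of that metric computed from scratch, with made-up predictions rather than any of the cited papers' data:

```python
# Macro F1: compute per-class precision/recall/F1, then average the F1s
# with equal weight per class (so rare phases count as much as common ones).
def macro_f1(y_true, y_pred):
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Illustrative toy labels only
y_true = ["arterial", "portal", "delayed", "arterial"]
y_pred = ["arterial", "portal", "portal", "arterial"]
score = macro_f1(y_true, y_pred)
```

One misclassified delayed-phase scan drags the macro average well below the per-sample accuracy, which is why the cited works report F1 alongside accuracy.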
“…The evaluation of their algorithm on external datasets reported F1 scores of 0.7679 and 0.8694 on two manually annotated online databases. Muhamedrahimov et al 20 used CE-CT images and the time from injection as the ground truth to train a regression machine learning model to predict the time from injection. Then they used these times to classify images and reported an overall accuracy of 0.933 in classification.…”
Section: Introduction (mentioning)
confidence: 99%
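The regression-then-threshold idea attributed to Muhamedrahimov et al above can be sketched as: a model predicts seconds since contrast injection, and fixed time windows map that prediction to a phase. The window boundaries below are illustrative assumptions, not the thresholds used in the cited work:

```python
# Convert a predicted time-from-injection (seconds) into a contrast phase
# via assumed, illustrative time windows. A real pipeline would regress the
# time from the CE-CT image first, then apply this mapping.
def phase_from_time(seconds):
    if seconds <= 0:
        return "non-contrast"   # scan acquired before injection
    if seconds <= 45:
        return "arterial"
    if seconds <= 90:
        return "portal venous"
    return "delayed"

predicted_times = [-5.0, 30.0, 120.0]
phases = [phase_from_time(t) for t in predicted_times]
```

Framing the task as regression lets the model exploit the ordinal structure of acquisition timing, with classification recovered afterwards by thresholding.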
“…This loss was necessary because standard categorical cross-entropy counts all classification errors as equal, while in our task errors between classes far from each other are more severe and should therefore be penalized more heavily. We expect that standard categorical cross-entropy could be used directly without loss of performance given a significant increase in the available training set (Muhamedrahimov et al, 2021; i.e., several thousand cases). We also showed that a warm-up learning rate schedule stabilizes training, yielding model performance that is uniform across folds.…”
Section: Discussion (mentioning)
confidence: 99%
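The distance-aware loss described above can be sketched as cross-entropy plus an expected-ordinal-distance penalty, so probability mass placed on phases far from the true one costs more. The exact combination (CE plus a λ-weighted expected distance) is an illustrative assumption, not the cited authors' formulation:

```python
import math

def ordinal_aware_loss(probs, true_idx, lam=1.0):
    """Cross-entropy plus an expected ordinal-distance penalty.

    probs: predicted distribution over phases in acquisition order.
    true_idx: index of the true phase in that ordering.
    lam: assumed trade-off weight between the two terms.
    """
    ce = -math.log(max(probs[true_idx], 1e-12))  # clamp to avoid log(0)
    # Expected |true - predicted| class distance under the model's distribution:
    # mass on distant phases is penalized proportionally more.
    expected_dist = sum(p * abs(j - true_idx) for j, p in enumerate(probs))
    return ce + lam * expected_dist

# Both toy predictions give the true class (index 2) the same probability,
# but the one whose remaining mass sits on a distant phase is penalized more.
near_miss = [0.1, 0.8, 0.1, 0.0, 0.0]
far_miss = [0.8, 0.1, 0.1, 0.0, 0.0]
```

Plain cross-entropy would score `near_miss` and `far_miss` identically; the distance term is what encodes the ordinal structure of the contrast phases.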