2022
DOI: 10.1038/s41598-022-12410-2

Deep learning supports the differentiation of alcoholic and other-than-alcoholic cirrhosis based on MRI

Abstract: Although CT and MRI are standard procedures in cirrhosis diagnosis, differentiation of etiology based on imaging is not established. This proof-of-concept study explores the potential of deep learning (DL) to support imaging-based differentiation of the etiology of liver cirrhosis. This retrospective, monocentric study included 465 patients with confirmed diagnosis of (a) alcoholic (n = 221) and (b) other-than-alcoholic (n = 244) cirrhosis. Standard T2-weighted single-slice images at the caudate lobe level wer…

Cited by 17 publications (6 citation statements)
References 23 publications
“…When fine-tuning the models for text classification, we applied the following concepts. As proposed in previous studies, we fine-tuned all pre-trained models for text classification in two steps: First, frozen pre-trained language model parameters were used to adapt the new classification head, and then all parameters were trained, but with layer-specific learning rates with maximum values increasing linearly from 10−9 to 10−6 from the first to the last layer [18–20]. Since the threshold for binarization of the predictions after sigmoid activation is not intrinsically set in multi-label classification, class-specific thresholds were determined by identifying the thresholds with the highest F1-scores on the training data [21].…”
Section: Methods
confidence: 99%
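The two-step recipe quoted above (linearly increasing layer-wise maximum learning rates, then per-class thresholds chosen by maximizing F1 on the training data) can be sketched as follows. This is an illustrative reconstruction, not the cited authors' code: the function names, the threshold grid, and the layer count are assumptions.

```python
import numpy as np

def layerwise_lrs(n_layers, lr_first=1e-9, lr_last=1e-6):
    """Maximum learning rates increasing linearly from the first
    to the last layer, as in the quoted fine-tuning recipe."""
    return list(np.linspace(lr_first, lr_last, n_layers))

def f1_score(y_true, y_pred):
    """Binary F1 from 0/1 arrays."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_thresholds(probs, labels, grid=np.linspace(0.05, 0.95, 19)):
    """For each class (column of sigmoid outputs), pick the
    binarization threshold that maximizes F1 on the training data."""
    n_classes = probs.shape[1]
    thresholds = np.zeros(n_classes)
    for c in range(n_classes):
        scores = [f1_score(labels[:, c], (probs[:, c] >= t).astype(int))
                  for t in grid]
        thresholds[c] = grid[int(np.argmax(scores))]
    return thresholds
```

In practice the per-layer rates would be passed as parameter groups to the optimizer; the threshold search runs once after training, on the training-set predictions.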
“…Luetkens et al used ResNet-50 and DenseNet-121 for the differentiation of alcoholic and other-than-alcoholic cirrhosis based on MRI. ResNet-50 achieved the best results (ACC 0.75, AUC 0.82); however, its performance was not significantly higher than that of DenseNet-121 [39]. Remedios et al provided an ablation study to compare convolutional neural networks for detecting large-vessel occlusion on computed tomography angiography in 300 patients.…”
Section: Discussion
confidence: 99%
“…Binary cross entropy loss, AdamW optimizer, a one-cycle learning rate schedule with a maximum learning rate of 0.01, a weight decay of 0.01, and a batch size of 128 were used for training [14]. While fine-tuning the M S/G model on gold labels after training with silver labels, the maximum learning rate was reduced by a factor of 10−1 per dense block from the last to the first block, as commonly done when applying pre-trained weights [15, 16]. Detailed information on model architecture and training can be found in supplement S4.…”
Section: Methods
confidence: 99%
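The block-wise learning-rate reduction described above can be sketched in a few lines. This is a minimal illustration, assuming a DenseNet-style model with four dense blocks; the helper name is hypothetical, and in PyTorch the resulting per-block rates would typically be handed to `torch.optim.AdamW` as parameter groups and cycled with `torch.optim.lr_scheduler.OneCycleLR`.

```python
def blockwise_max_lrs(n_blocks, base_lr=0.01, decay=0.1):
    """Maximum one-cycle learning rate per dense block: the last block
    keeps base_lr; each earlier block is scaled down by `decay`,
    i.e. a factor of 10^-1 per block from last to first."""
    return [base_lr * decay ** (n_blocks - 1 - i) for i in range(n_blocks)]

# For a DenseNet-style model with 4 dense blocks (an assumption here),
# this yields ~[1e-5, 1e-4, 1e-3, 1e-2], first block to last.
lrs = blockwise_max_lrs(4)
```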