2022
DOI: 10.1053/j.gastro.2022.02.025
Deep Learning-Based Classification of Hepatocellular Nodular Lesions on Whole-Slide Histopathologic Images

Cited by 64 publications (27 citation statements)
References 33 publications
“…Jin et al (10) improved this situation by developing a specially designed slide scanner that can automatically divide each WSI into 300 patches. Cheng et al (28) invited three subspecialists to manually mark out regions of interest (ROIs) in images, which is convenient but labor-intensive. Our study focused on a lightweight strategy that could acquire images from representative biopsy tissue in a short period.…”
Section: Discussion
confidence: 99%
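The patch-based preparation of whole-slide images described in the excerpt above can be sketched roughly as follows. This is a minimal illustration using the openslide-python package; the file name, patch size, and background-filtering threshold are assumptions for illustration, not the cited authors' settings.

```python
# Minimal sketch: divide a whole-slide image (WSI) into fixed-size level-0
# patches and keep only those containing enough tissue.
# Assumes openslide-python; path and thresholds are placeholders.
import numpy as np
import openslide

def tile_wsi(wsi_path, patch_size=512, tissue_threshold=0.5):
    """Yield (x, y, patch) tuples for level-0 patches that contain enough tissue."""
    slide = openslide.OpenSlide(wsi_path)
    width, height = slide.dimensions  # level-0 size in pixels
    for y in range(0, height - patch_size + 1, patch_size):
        for x in range(0, width - patch_size + 1, patch_size):
            region = slide.read_region((x, y), 0, (patch_size, patch_size))
            rgb = np.asarray(region.convert("RGB"))
            # Crude background filter: keep patches that are not mostly white.
            tissue_fraction = (rgb.mean(axis=2) < 220).mean()
            if tissue_fraction >= tissue_threshold:
                yield x, y, rgb
    slide.close()

# Example usage (hypothetical file name):
# patches = [p for _, _, p in tile_wsi("example_biopsy.svs")]
```

Automated tiling of this kind replaces both the dedicated scanner and the manual ROI annotation mentioned in the excerpt, at the cost of a cruder tissue/background separation.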
“…Then three networks based on EfficientNet-B5 were trained with the three commonly used learning rates of 8e-4, 1e-3, and 1e-2 on the 161,892 patches at 40× resolution from the 127 WSIs in the training cohort, with the patients' distinct outcomes as ground truth [30-32]. We first chose the learning rate of 8e-4 because it is the default value configured in our medical image classification work. Two additional learning rates, 1e-3 and 1e-2, were then adopted to test whether other learning rates helped improve performance.…”
Section: Training of Deep Learning Models for Classifications of Trea…
confidence: 99%
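As a rough illustration of the training setup described above (one EfficientNet-B5 network per candidate learning rate), the following PyTorch sketch trains a classifier for a given learning rate. The dataset object, batch size, epoch count, and number of outcome classes are illustrative assumptions, not the cited configuration.

```python
# Minimal sketch: train EfficientNet-B5 classifiers at the three learning
# rates mentioned in the excerpt (8e-4, 1e-3, 1e-2). Assumes torchvision
# and a patch-level dataset yielding (image_tensor, label) pairs.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision.models import efficientnet_b5

def train_one_model(train_dataset, num_classes, lr, epochs=10, device="cuda"):
    model = efficientnet_b5(weights=None)
    # Replace the final classification layer to match the number of outcome classes.
    model.classifier[1] = nn.Linear(model.classifier[1].in_features, num_classes)
    model = model.to(device)

    loader = DataLoader(train_dataset, batch_size=16, shuffle=True, num_workers=4)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for patches, labels in loader:
            patches, labels = patches.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(patches), labels)
            loss.backward()
            optimizer.step()
    return model

# One model per candidate learning rate, as in the excerpt:
# models = {lr: train_one_model(train_ds, num_classes=2, lr=lr)
#           for lr in (8e-4, 1e-3, 1e-2)}
```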
“…The abovementioned results provide a glimpse into the current state of CNN applications in ECG analysis. We used two commonly used models, VGGNet16 and ResNet50, pretrained for image classification, to perform transfer learning [24,26]. The AUC values were 0.88, 0.87, and 0.76 for our CNN, the VGGNet16 model, and the ResNet50 model, respectively (S3 Fig). Our CNN performs better than ResNet50 and has almost the same performance as VGGNet16.…”
Section: PLOS ONE
confidence: 99%
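The transfer-learning comparison described in the excerpt above (ImageNet-pretrained VGGNet16 and ResNet50 evaluated by AUC) could look roughly like the following sketch. The torchvision constructors, the binary-label setup, and the validation loader are assumptions; the cited study's exact preprocessing and fine-tuning schedule are not reproduced here.

```python
# Minimal sketch: load ImageNet-pretrained VGG16 / ResNet50 backbones,
# swap in a task-specific head, and compare fine-tuned models by AUC.
import torch
import torch.nn as nn
from torchvision.models import vgg16, resnet50, VGG16_Weights, ResNet50_Weights
from sklearn.metrics import roc_auc_score

def build_pretrained(name, num_classes=2):
    """Load an ImageNet-pretrained backbone and replace its final layer."""
    if name == "vgg16":
        model = vgg16(weights=VGG16_Weights.DEFAULT)
        model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
    elif name == "resnet50":
        model = resnet50(weights=ResNet50_Weights.DEFAULT)
        model.fc = nn.Linear(model.fc.in_features, num_classes)
    else:
        raise ValueError(f"unknown model: {name}")
    return model

@torch.no_grad()
def evaluate_auc(model, loader, device="cuda"):
    """Compute AUC of the positive-class probability over a validation loader."""
    model.eval().to(device)
    scores, labels = [], []
    for x, y in loader:
        probs = torch.softmax(model(x.to(device)), dim=1)[:, 1]
        scores.extend(probs.cpu().tolist())
        labels.extend(y.tolist())
    return roc_auc_score(labels, scores)

# Example comparison after fine-tuning each backbone on the task of interest:
# for name in ("vgg16", "resnet50"):
#     print(name, evaluate_auc(finetuned_models[name], val_loader))
```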