2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)
DOI: 10.1109/bibm47256.2019.8983226

Texture-based Deep Learning for Effective Histopathological Cancer Image Classification

Cited by 14 publications (9 citation statements). References 10 publications.
“…A patch-wise pretrained model is optimized on an objective function of a target task (e.g., classification, survival analysis) using entire patches of the training data, and each patch produces a predictive result (e.g., cancer probability) as a patch-wise analysis. In this study, we used Google-Brain (GB) 12 and CAncer-Texture Network (CAT-Net) 13 as pretrained models.…”
Section: Methods (mentioning; confidence: 99%)
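A minimal sketch of this patch-wise setup, assuming PyTorch with torchvision >= 0.13: a generic ImageNet-pretrained ResNet-18 stands in for GB or CAT-Net (whose architectures are not reproduced here), the final layer is replaced with a single cancer logit, and the model is fine-tuned on a binary objective so that every patch yields a cancer probability. `patch_loader` is a hypothetical DataLoader of (patch, label) pairs, not a name from the cited papers.

```python
# Minimal sketch of patch-wise fine-tuning; resnet18 is a stand-in for the
# pretrained GB / CAT-Net models, and `patch_loader` is a hypothetical
# DataLoader yielding (patch_tensor, label) pairs.
import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

model = models.resnet18(weights="IMAGENET1K_V1")   # generic pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 1)      # single-logit cancer head
model = model.to(device)

criterion = nn.BCEWithLogitsLoss()                 # binary target-task objective
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_one_epoch(patch_loader):
    """Optimize the model on all training patches, as the excerpt describes."""
    model.train()
    for patches, labels in patch_loader:
        patches, labels = patches.to(device), labels.float().to(device)
        optimizer.zero_grad()
        loss = criterion(model(patches).squeeze(1), labels)
        loss.backward()
        optimizer.step()

@torch.no_grad()
def patch_probabilities(patches):
    """Return one cancer probability per patch, i.e. the patch-wise predictive result."""
    model.eval()
    return torch.sigmoid(model(patches.to(device))).squeeze(1)
```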
“…Patch-wise approaches typically consist of three steps: (1) a WSI is divided into smaller patches; (2) each of the patches computes a probability score of a diagnosis or a clinical outcome (e.g., a probability that the patch region is cancerous); and (3) the probability scores of the patches are combined into an entire probability map of the WSI 11. For instance, Convolutional Neural Networks (CNNs) were trained with fixed-size patch images (e.g., 299 × 299 pixels), and the patch-wise results localized cancerous regions in a WSI 12,13. Patches in annotated Regions of Interest (ROI) were used to train a CNN-based model to predict risk scores of patches 14,15.…”
Mentioning; confidence: 99%
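The three quoted steps can be sketched directly. The snippet below assumes the WSI has already been read into a NumPy array of shape (H, W, 3) and that `patch_probability` is a hypothetical callable (for instance, wrapping the patch model sketched earlier) that maps one 299 × 299 RGB patch to a cancer probability; neither name comes from the cited papers.

```python
# Minimal sketch of the three patch-wise steps: tile the WSI, score each patch,
# and assemble the scores into a whole-slide probability map.
import numpy as np

PATCH = 299  # fixed patch size, matching the 299 x 299 example in the excerpt

def probability_map(wsi: np.ndarray, patch_probability) -> np.ndarray:
    h, w, _ = wsi.shape
    rows, cols = h // PATCH, w // PATCH
    heatmap = np.zeros((rows, cols), dtype=np.float32)
    for r in range(rows):                               # step 1: divide into patches
        for c in range(cols):
            patch = wsi[r * PATCH:(r + 1) * PATCH,
                        c * PATCH:(c + 1) * PATCH]
            heatmap[r, c] = patch_probability(patch)    # step 2: per-patch probability
    return heatmap                                      # step 3: combined probability map
```

Thresholding the resulting map is one simple way to localize candidate cancerous regions in the slide, in the spirit of the patch-wise results described above.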
“…Multi-gigapixel whole-slide pathology images can be processed with deep learning in order to detect breast cancer [31], skin cancer [32], [33], prostate cancer [33], lung cancer [33], cervical cancer [34] and cancer in the digestive tract [35]. Some methods are even able to detect the cancer subtypes [33] or detect the spread of cancer to lymph nodes (metastasis) [36].…”
Section: A. Medical and Biomedical Image Analysis (mentioning; confidence: 99%)
“…Therefore, it would be desirable to help pathologists more accurately determine whether a patient belongs to the EBV group only based on cost-efficient analysis of pathological images. Most recent works [6][7][8] focused on the task of gastric cancer classification into positive and negative categories, with the exception of a recent work 9 where a deep convolutional neural network (CNN) with the ResNet backbone 10 was trained to predict the molecular subtypes of gastric cancer called microsatellite instability (MSI) and microsatellite stability (MSS).…”
Section: Introduction (mentioning; confidence: 99%)
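As a rough, hypothetical illustration of the architecture mentioned in this excerpt (not the cited authors' exact configuration), a ResNet-backbone classifier with a two-class MSI/MSS head could be assembled as follows, again assuming torchvision >= 0.13.

```python
# Hypothetical sketch: ResNet backbone with a two-class head for MSI vs. MSS.
import torch.nn as nn
from torchvision import models

def build_msi_classifier(num_classes: int = 2) -> nn.Module:
    backbone = models.resnet50(weights="IMAGENET1K_V1")             # pretrained ResNet backbone
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)   # MSI / MSS output head
    return backbone
```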