2023
DOI: 10.1016/j.imed.2023.01.004
Importance of complementary data to histopathological image analysis of oral leukoplakia and carcinoma using deep neural networks

Cited by 7 publications (2 citation statements) | References 19 publications
“…Textural characteristics were extracted from the images as well by converting them to the Gray Level Co-Occurrence Matrix (GLCM) [13] and the Local Binary Pattern (LBP) [14] because the GLCM determines an image's texture by determining how frequently pairs of pixels with specific values and spatial relationships appear in the image [15] and the local spatial patterns and the contrast in the grey scale in an image are effectively captured by LBP descriptors [16]. With the most recent advancements in machine learning, numerous deep learning-based techniques, including convolutional neural network (CNN), pre-trained deep CNN networks [17], like Alexnet, VGG 16, VGG 19, ResNet 50 [18], MobileNet [19], multimodal fusion with CoaT (coat-lite-small), PiT (pooling based vision transformer pits-distilled-224), ViT (vision transformer small-patch16-384), ResNetV2 and ResNetY [20], and concatenated models of VGG 16, Inception V3 [21], have been proposed for the automated extraction of morphological features. After the feature extraction, the images were classified into normal and OSCC categories using different classifiers such as random forest [22], support vector machine (SVM) [10], extreme gradient boosting (XGBoost) with binary particle swarm optimization (BPSO) feature selection [23], K nearest neighbor (KNN) [10], duck patch optimization based deep learning method [24] and two pretrained models, ResNet 50 and DenseNet 201 [11].…”
Section: Introduction (confidence: 99%)
“…[11], [18], [20], [21], [22], [23], [24] using the public OSCC dataset, in terms of accuracy, precision and sensitivity. The results are summarised in Table…”
(confidence: 99%)
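The three comparison metrics named in the quoted passage (accuracy, precision, sensitivity) for a binary normal-vs-OSCC classification can be computed with scikit-learn. The labels below are toy values for illustration only, not results from the cited studies.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# toy predictions: 1 = OSCC, 0 = normal (illustrative values only)
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

accuracy = accuracy_score(y_true, y_pred)     # (TP + TN) / total
precision = precision_score(y_true, y_pred)   # TP / (TP + FP)
sensitivity = recall_score(y_true, y_pred)    # TP / (TP + FN), i.e. recall on OSCC
```

Sensitivity is simply recall on the positive (OSCC) class, which is why `recall_score` is used here.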