2022
DOI: 10.1007/978-3-030-97672-9_34
The Classification of Oral Squamous Cell Carcinoma (OSCC) by Means of Transfer Learning

Cited by 4 publications (5 citation statements) | References 13 publications
“…The AUC value of the 3DCNN network model on the enhancement-rate images is 0.801, which is approximately 5% more than that of the trial with a single enhanced image. Ahmad Ridhauddin Abdul Rauf et al. [11] present a study in which transfer learning, a class of deep learning techniques, was applied; to extract features from texture-based images, the Inception V3 pre-trained convolutional neural network model is employed.…”
Section: Introduction (mentioning)
confidence: 99%
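The statement above describes using a pre-trained Inception V3 network purely as a feature extractor under a transfer-learning setup. The sketch below is a minimal illustration of that idea, assuming a Keras/TensorFlow environment; the input size, pooling choice, and dummy data are assumptions for illustration, not details taken from the cited work.

```python
# Minimal sketch: Inception V3 (ImageNet weights) used as a fixed feature extractor.
# Input shape, pooling, and the dummy images are assumptions for illustration only.
import numpy as np
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input

# Load the pre-trained backbone without its classification head;
# global average pooling yields one 2048-dimensional vector per image.
backbone = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images: np.ndarray) -> np.ndarray:
    """images: float array of shape (n, 299, 299, 3) with pixel values in [0, 255]."""
    x = preprocess_input(images.copy())    # scale pixels to Inception's expected range
    return backbone.predict(x, verbose=0)  # shape (n, 2048)

# Random data standing in for texture-based images.
dummy = np.random.uniform(0, 255, size=(4, 299, 299, 3)).astype("float32")
print(extract_features(dummy).shape)       # (4, 2048)
```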
“…With the most recent advancements in machine learning, numerous deep learning-based techniques, including convolutional neural network (CNN), pre-trained deep CNN networks [17], like AlexNet, VGG 16, VGG 19, ResNet 50 [18], MobileNet [19], multimodal fusion with CoaT (coat-lite-small), PiT (pooling-based vision transformer pits-distilled-224), ViT (vision transformer small-patch16-384), ResNetV2 and ResNetY [20], and concatenated models of VGG 16 and Inception V3 [21], have been proposed for the automated extraction of morphological features. After the feature extraction, the images were classified into normal and OSCC categories using different classifiers such as random forest [22], support vector machine (SVM) [10], extreme gradient boosting (XGBoost) with binary particle swarm optimization (BPSO) feature selection [23], K nearest neighbor (KNN) [10], a duck patch optimization based deep learning method [24], and two pretrained models, ResNet 50 and DenseNet 201 [11]. However, as the number of layers of the network increases, the complexity will also increase.…”
Section: Introduction (mentioning)
confidence: 99%
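This statement outlines a two-stage pipeline: deep features extracted by a pre-trained CNN, followed by a classical classifier such as random forest, SVM, or KNN. The sketch below shows one such combination (a ResNet 50 backbone feeding an SVM); the backbone choice, classifier hyperparameters, and placeholder data are assumptions, not the cited authors' exact configuration.

```python
# Rough sketch: pre-trained CNN features followed by a classical classifier (here an SVM).
# Backbone, classifier settings, and the placeholder data are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def deep_features(images: np.ndarray) -> np.ndarray:
    """images: (n, 224, 224, 3) float array; returns (n, 2048) deep-feature vectors."""
    return backbone.predict(preprocess_input(images.copy()), verbose=0)

# Placeholder data: X_img would be histopathology images, y the normal/OSCC labels.
X_img = np.random.uniform(0, 255, size=(20, 224, 224, 3)).astype("float32")
y = np.random.randint(0, 2, size=20)  # 0 = normal, 1 = OSCC (illustrative labels)

X = deep_features(X_img)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

Any of the classifiers named in the statement (random forest, KNN, XGBoost) could be swapped into the same pipeline, which is why such two-stage approaches are easy to compare on a common feature set.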
“…[11], [18], [20], [21], [22], [23], [24], using the public OSCC dataset, in terms of accuracy, precision, and sensitivity. The results are summarised in Table…”
mentioning
confidence: 99%
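Since the comparison referred to here is reported in terms of accuracy, precision, and sensitivity, the short snippet below shows how those metrics are typically computed from predicted labels; the label vectors are dummy values, not results from the cited table.

```python
# Minimal sketch: accuracy, precision, and sensitivity (recall) from predicted labels.
# The label vectors below are dummy values, not results from the cited comparison.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # 1 = OSCC, 0 = normal (illustrative)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print("accuracy   :", accuracy_score(y_true, y_pred))
print("precision  :", precision_score(y_true, y_pred))
print("sensitivity:", recall_score(y_true, y_pred))  # recall is the same as sensitivity
```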
“…Along with the need for oversight and result interpretation, ethical issues related to automated diagnosis should also be taken into account [4]. Oral malignancy is a prevalent and life-threatening form of cancer. Recent advancements in ML have made it more convenient for doctors to utilize these techniques in medical image classification.…”
Section: Introduction (mentioning)
confidence: 99%
“…It is crucial to keep in mind that this study is only one examination. Along with the need for oversight and result interpretation, ethical issues related to automated diagnosis should also be taken into account [4]. Oral malignancy is a prevalent and life-threatening form of cancer.…”
Section: Introduction (mentioning)
confidence: 99%