Background
Oral cancer is a deadly disease and one of the most common malignant tumors worldwide, and it has become an increasingly important public health problem in developing and low‐to‐middle income countries. This study aimed to use convolutional neural network (CNN) deep learning algorithms to develop an automated classification and detection model for oral cancer screening.
Methods
The study included 700 clinical oral photographs, collected retrospectively from an oral and maxillofacial center, which were divided into 350 images of oral squamous cell carcinoma and 350 images of normal oral mucosa. The classification and detection models were built using DenseNet121 and faster R‐CNN, respectively. Four hundred and ninety images were randomly selected as training data; a further 70 and 140 images were assigned as validation and testing data, respectively.
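The 490/70/140 split described above corresponds to a 70/10/20 partition of the 700 images. A minimal stdlib sketch of such a random split (the file names are placeholders, not the study's data):

```python
import random

def split_dataset(items, n_train, n_val, seed=42):
    """Shuffle and partition items into train/validation/test subsets."""
    items = list(items)
    random.Random(seed).shuffle(items)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

# Placeholder image names standing in for the 700 clinical photographs.
images = [f"img_{i:03d}.jpg" for i in range(700)]
train, val, test = split_dataset(images, n_train=490, n_val=70)
```

In practice the shuffle would usually be stratified by class so that the carcinoma and normal-mucosa images stay balanced across the three subsets.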
Results
The DenseNet121 classification model achieved a precision of 99%, a recall of 100%, an F1 score of 99%, a sensitivity of 98.75%, a specificity of 100%, and an area under the receiver operating characteristic curve of 99%. The faster R‐CNN detection model achieved a precision of 76.67%, a recall of 82.14%, an F1 score of 79.31%, and an area under the precision‐recall curve of 0.79.
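The metrics quoted above all derive from the four confusion-matrix counts. A minimal stdlib sketch of their definitions (the counts below are illustrative, not the study's actual results):

```python
def binary_metrics(tp, fp, fn, tn):
    """Standard binary classification metrics from raw confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # recall of the positive class (sensitivity)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1}

# Illustrative counts only.
m = binary_metrics(tp=9, fp=1, fn=3, tn=7)
```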
Conclusion
The DenseNet121 and faster R‐CNN algorithms were shown to offer acceptable potential for the classification and detection of cancerous lesions in oral photographic images.
Artificial intelligence (AI) applications in oncology have developed rapidly in recent years, with reported successes. This work aims to evaluate the performance of deep convolutional neural network (CNN) algorithms for the classification and detection of oral potentially malignant disorders (OPMDs) and oral squamous cell carcinoma (OSCC) in oral photographic images. A dataset comprising 980 oral photographic images was divided into 365 images of OSCC, 315 images of OPMDs and 300 non-pathological images. Multiclass image classification models were created using DenseNet-169, ResNet-101, SqueezeNet and Swin-S. Multiclass object detection models were built using faster R-CNN, YOLOv5, RetinaNet and CenterNet2. The best multiclass image classification model, DenseNet-169, achieved an AUC of 1.00 and 0.98 on OSCC and OPMDs, respectively. The best multiclass CNN-based object detection model, faster R-CNN, achieved an AUC of 0.88 and 0.64 on OSCC and OPMDs, respectively. The classification results were in line with the performance of experts and superior to those of general practitioners (GPs). In conclusion, CNN-based models have potential for the identification of OSCC and OPMDs in oral photographic images and are expected to serve as a diagnostic tool to assist GPs in the early detection of oral cancer.
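The AUC values reported across both studies can be read as the probability that a randomly chosen positive image is scored higher than a randomly chosen negative one (with multiclass AUC typically computed one-vs-rest per class). A minimal rank-based sketch of that interpretation, using toy scores rather than the studies' data:

```python
def roc_auc(labels, scores):
    """AUC as the Mann-Whitney statistic: the fraction of (positive, negative)
    pairs in which the positive example receives the higher score
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 3 positive and 3 negative images with illustrative scores.
auc = roc_auc([1, 1, 1, 0, 0, 0], [0.9, 0.8, 0.4, 0.5, 0.3, 0.1])
```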
Background
The aim of this study was to evaluate the most effective combination of autoregressive integrated moving average (ARIMA), a time series model, and association rule mining (ARM) techniques to identify meaningful prognostic factors and predict the number of cases for efficient COVID-19 crisis management.
Methods
The 3685 COVID-19 patients admitted to Thailand’s first university field hospital across the four waves of infection from March 2020 to August 2021 were analyzed using the autoregressive integrated moving average (ARIMA) model, its extension with exogenous variables (ARIMAX), and association rule mining (ARM).
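In ARIMA(p, d, q) notation the middle term is the differencing order, so the ARIMA(2, 2, 2) model reported below models the twice-differenced admission series. A minimal stdlib sketch of that "I" (integration) step, on a toy series rather than the hospital data:

```python
def difference(series, d=1):
    """Apply first-order differencing d times (the 'I' in ARIMA)."""
    for _ in range(d):
        series = [b - a for a, b in zip(series, series[1:])]
    return series

def undifference(diffed, first_value):
    """Invert one round of differencing given the original first value."""
    out = [first_value]
    for step in diffed:
        out.append(out[-1] + step)
    return out

y = [3, 5, 9, 15, 23]      # toy daily admission counts
d2 = difference(y, d=2)    # second-order differences, as in ARIMA(2, 2, 2)
```

The AR and MA terms would then be fitted on the differenced series; libraries such as statsmodels bundle all three steps behind a single model class.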
Results
The ARIMA (2, 2, 2) model with an optimized parameter set predicted the number of COVID-19 cases admitted to the hospital with acceptable error scores (R2 = 0.5695, RMSE = 29.7605, MAE = 27.5102). Key features from ARM (symptoms, age, and underlying diseases) were selected to build an ARIMAX (1, 1, 1) model, which yielded better performance in predicting the number of admitted cases (R2 = 0.5695, RMSE = 27.7508, MAE = 23.4642). The association analysis revealed that hospital stays longer than 14 days were associated with patients who were healthcare workers and patients presenting with underlying diseases. Worsening cases that required referral to the hospital ward were associated with patients admitted with symptoms, pregnancy, metabolic syndrome, and age greater than 65 years.
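The error scores above are standard regression metrics. A stdlib sketch of their definitions, using toy predictions rather than the study's data:

```python
import math

def regression_scores(actual, predicted):
    """R2, RMSE and MAE, the three error scores reported for the fits."""
    n = len(actual)
    mean = sum(actual) / n
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    r2 = 1 - ss_res / ss_tot
    rmse = math.sqrt(ss_res / n)
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    return r2, rmse, mae

# Toy values only.
r2, rmse, mae = regression_scores([10, 20, 30, 40], [12, 18, 33, 39])
```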
Conclusions
This study demonstrated that the ARIMAX model can predict the number of COVID-19 cases by incorporating the most strongly associated prognostic factors, identified by the ARM technique, into the ARIMA model; this approach could support the preparation and optimal management of hospital resources during pandemics.
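Association rules such as those linking long hospital stays to healthcare-worker status are typically ranked by support and confidence. A minimal sketch over toy patient records (the attribute labels are illustrative, not the study's fields):

```python
def support(transactions, itemset):
    """Fraction of records containing every item in the itemset."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """P(consequent | antecedent): the usual ARM rule-strength measure."""
    both = set(antecedent) | set(consequent)
    return support(transactions, both) / support(transactions, antecedent)

# Toy records: each patient is a set of attributes (illustrative labels).
patients = [
    {"healthcare_worker", "stay>14d"},
    {"healthcare_worker", "stay>14d", "underlying_disease"},
    {"healthcare_worker"},
    {"underlying_disease", "stay>14d"},
    {"symptomatic"},
]
conf = confidence(patients, {"healthcare_worker"}, {"stay>14d"})
```

Algorithms such as Apriori enumerate frequent itemsets first and then keep only rules whose support and confidence exceed chosen thresholds.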