Oral cancer is a growing health issue in a number of low- and middle-income countries (LMIC), particularly in South and Southeast Asia. The described dual-modality, dual-view, point-of-care oral cancer screening device, developed for high-risk populations in remote regions with limited infrastructure, implements autofluorescence imaging (AFI) and white light imaging (WLI) on a smartphone platform, enabling early detection of pre-cancerous and cancerous lesions in the oral cavity with the potential to reduce morbidity, mortality, and overall healthcare costs. Using a custom Android application, this device synchronizes external light-emitting diode (LED) illumination and image capture for AFI and WLI. Data are uploaded to a cloud server for diagnosis by a remote specialist through a web app, with the ability to transmit triage instructions back to the device and patient. Finally, with the on-site specialist's diagnosis as the gold standard, the remote specialist and a convolutional neural network (CNN) were able to classify 170 image pairs as 'suspicious' or 'not suspicious' with sensitivities, specificities, positive predictive values, and negative predictive values ranging from 81.25% to 94.94%.
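The sensitivity, specificity, positive predictive value, and negative predictive value quoted above all follow directly from the confusion matrix of the classified image pairs. As a minimal sketch (the counts below are hypothetical, not the study's actual tallies):

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics for a binary
    'suspicious' vs. 'not suspicious' screening result."""
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for illustration only:
sens, spec, ppv, npv = screening_metrics(tp=80, fp=10, tn=70, fn=10)
```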
Background: Surgery is the main modality of cure for solid cancers and was prioritised to continue during COVID-19 outbreaks. This study aimed to identify immediate areas for system strengthening by comparing the delivery of elective cancer surgery during the COVID-19 pandemic in periods of lockdown versus light restriction.

Methods: This international, prospective, cohort study enrolled 20 006 adult (≥18 years) patients from 466 hospitals in 61 countries with 15 cancer types, who had a decision for curative surgery during the COVID-19 pandemic and were followed up until the point of surgery or cessation of follow-up (Aug 31, 2020). Average national Oxford COVID-19 Stringency Index scores were calculated to define the government response to COVID-19 for each patient for the period they awaited surgery, and classified into light restrictions (index <20), moderate lockdowns (20–60), and full lockdowns (>60). The primary outcome was the non-operation rate (defined as the proportion of patients who did not undergo planned surgery). Cox proportional-hazards regression models were used to explore the associations between lockdowns and non-operation. Intervals from diagnosis to surgery were compared across COVID-19 government response index groups. This study was registered at ClinicalTrials.gov, NCT04384926.

Findings: Of eligible patients awaiting surgery, 2003 (10·0%) of 20 006 did not receive surgery after a median follow-up of 23 weeks (IQR 16–30), all of whom had a COVID-19-related reason given for non-operation. Light restrictions were associated with a 0·6% non-operation rate (26 of 4521), moderate lockdowns with a 5·5% rate (201 of 3646; adjusted hazard ratio [HR] 0·81, 95% CI 0·77–0·84; p<0·0001), and full lockdowns with a 15·0% rate (1775 of 11 827; HR 0·51, 0·50–0·53; p<0·0001). In sensitivity analyses, including adjustment for SARS-CoV-2 case notification rates, moderate lockdowns (HR 0·84, 95% CI 0·80–0·88; p<0·001) and full lockdowns (0·57, 0·54–0·60; p<0·001) remained independently associated with non-operation. Surgery beyond 12 weeks from diagnosis in patients without neoadjuvant therapy increased during lockdowns (374 [9·1%] of 4521 in light restrictions, 317 [10·4%] of 3646 in moderate lockdowns, 2001 [23·8%] of 11 827 in full lockdowns), although no differences in resectability rates were observed with longer delays.

Interpretation: Cancer surgery systems worldwide were fragile to lockdowns, with one in seven patients in regions with full lockdowns not undergoing planned surgery and experiencing longer preoperative delays. Although short-term oncological outcomes were not compromised in those selected for surgery, delays and non-operations might lead to long-term reductions in survival. During current and future periods of societal restriction, the resilience of elective surgery systems requires strengthening, which might include...
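The grouping of patients by government response in the Methods above can be sketched as a simple threshold function over the averaged stringency score. The abstract gives the ranges as <20, 20–60, and >60; how scores exactly at the boundaries were assigned is an assumption here:

```python
def classify_stringency(index):
    """Map an average Oxford COVID-19 Stringency Index score (0-100)
    to the government-response groups used in the study.
    Boundary handling at exactly 20 and 60 is assumed, not stated."""
    if index < 20:
        return "light restrictions"
    elif index <= 60:
        return "moderate lockdown"
    else:
        return "full lockdown"
```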
With the goal of screening high-risk populations for oral cancer in low- and middle-income countries (LMICs), we have developed a low-cost, portable, easy-to-use, smartphone-based intraoral dual-modality imaging platform. In this paper, we present an image classification approach based on autofluorescence and white light images using deep learning methods. The information from the autofluorescence and white light image pair is extracted, calculated, and fused to feed the deep learning neural networks. We have investigated and compared the performance of different convolutional neural networks, transfer learning, and several regularization techniques for oral cancer classification. Our experimental results demonstrate the effectiveness of deep learning methods in classifying dual-modal images for oral cancer detection.
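One common way to fuse an image pair for a CNN input is channel concatenation; the abstract does not specify the exact extraction and fusion pipeline, so the sketch below is only an illustration of that general strategy, not the paper's method:

```python
import numpy as np

def fuse_image_pair(afi, wli):
    """Fuse an autofluorescence image (grayscale, H x W) and a
    white-light image (RGB, H x W x 3) into one multi-channel array
    suitable as CNN input. Channel concatenation is an assumed
    fusion strategy for illustration only."""
    if afi.ndim == 2:
        afi = afi[..., np.newaxis]            # add a channel axis
    fused = np.concatenate([wli, afi], axis=-1)  # H x W x (3 + 1)
    return fused.astype(np.float32) / 255.0      # scale to [0, 1]
```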
Significance: Oral cancer is among the most common cancers globally, especially in low- and middle-income countries. Early detection is the most effective way to reduce the mortality rate. Deep learning-based cancer image classification models usually need to be hosted on a computing server; however, internet connections are unreliable for screening in low-resource settings. Aim: To develop a mobile-based dual-mode image classification method and customized Android application for point-of-care oral cancer detection. Approach: The dataset used in our study was captured from 5025 patients with our customized dual-modality mobile oral screening devices. We trained an efficient MobileNet network with focal loss and converted the model into TensorFlow Lite format. The finalized lite-format model is ideal for operation on a smartphone platform. We have developed an Android smartphone application in an easy-to-use format that implements the mobile-based dual-modality image classification approach to distinguish oral potentially malignant and malignant images from normal/benign images. Results: We investigated the accuracy and running speed on a cost-effective smartphone computing platform. It takes to process one image pair with the Moto G5 Android smartphone. We tested the proposed method on a standalone dataset and achieved 81% accuracy for distinguishing normal/benign lesions from clinically suspicious lesions, using a gold standard of clinical impression based on the review of images by oral specialists. Conclusions: Our study demonstrates the effectiveness of a mobile-based approach for oral cancer screening in low-resource settings.
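The focal loss mentioned in the Approach down-weights easy, well-classified examples so training concentrates on hard, misclassified lesions. A minimal NumPy sketch of binary focal loss (Lin et al.) follows; the hyperparameters gamma and alpha are the commonly used defaults, not necessarily the values used in the study:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss. p is the predicted probability of the
    positive (suspicious) class; y is the label, 1 = suspicious,
    0 = normal/benign. With gamma = 0 and alpha = 0.5 this reduces
    to half the standard binary cross-entropy."""
    p = np.clip(p, 1e-7, 1 - 1e-7)           # numerical stability
    p_t = np.where(y == 1, p, 1 - p)         # prob. of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))
```

The (1 - p_t)**gamma factor is the key design choice: a confident correct prediction (p_t near 1) contributes almost nothing, leaving the gradient dominated by hard examples.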