Aim/Purpose: The aim of this study was to develop a prototype of an information-generating computer tool that automatically maps the dental restorations in a panoramic radiograph.
Background: A panoramic radiograph is an extraoral dental radiograph of the oro-maxillofacial region, obtained with minimal discomfort and a significantly lower radiation dose than full-mouth intra-oral radiographs or cone-beam computed tomography (CBCT) imaging. Currently, however, an informative radiologic report is not routinely produced for a panoramic radiograph, and the referring doctor must interpret the radiograph manually, according to their own judgment.
Methodology: An algorithm based on computer-vision and machine-learning techniques was developed to automatically detect and classify dental restorations in a panoramic radiograph, such as fillings, crowns, root canal treatments, and implants. An experienced dentist evaluated 63 anonymized panoramic images and manually marked 316 restorations on them. The images were automatically cropped to a region of interest (ROI) containing only the upper and lower alveolar ridges. The algorithm automatically segmented the restorations using a local adaptive threshold. To improve detection of the dental restorations, morphological operations such as opening, closing, and hole-filling were applied. Since each restoration is characterized by a unique shape and a unique gray-level distribution, 20 numerical features describing the contour and the texture were extracted in order to classify the restorations. Twenty-two different machine-learning models were evaluated, using a cross-validation approach, to automatically classify the dental restorations into 9 categories.
Contribution: The computer tool will provide automatic detection and classification of dental restorations, as an initial step toward automatic detection of oral pathologies in a panoramic radiograph.
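The segmentation step described above (a local adaptive threshold followed by opening, closing, and hole-filling) can be sketched as follows. This is a minimal illustration, not the study's implementation: the window size, threshold offset, and structuring elements are assumed values, since the abstract does not report the actual parameters.

```python
import numpy as np
from scipy import ndimage

def segment_restorations(roi, window=15, offset=0.1):
    """Segment bright dental restorations in a cropped ROI.

    roi: 2D float array in [0, 1] (the alveolar-ridge region).
    A pixel is foreground if it exceeds its local mean by `offset`
    (a simple local adaptive threshold).
    """
    local_mean = ndimage.uniform_filter(roi.astype(float), size=window)
    mask = roi > local_mean + offset
    # Morphological clean-up: opening removes isolated specks,
    # closing bridges small gaps, hole-filling solidifies interiors.
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_fill_holes(mask)
    return mask
```

On a radiograph, bright metallic restorations stand out against the surrounding bone, which is why a locally adaptive (rather than global) threshold is suited to the uneven exposure of panoramic images.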
The use of this algorithm will aid in generating a radiologic report that includes all the information required to improve patient management and treatment outcomes.
Findings: The automatic cropping of the ROI in the panoramic radiographs, so as to include only the alveolar ridges, was successful in 97% of the cases. The developed algorithm for detection and classification of the dental restorations correctly detected 95% of the restorations. 'Weighted k-NN' was the machine-learning model that yielded the best classification rate for the dental restorations: 92%.
Impact on Society: The information extracted automatically from the panoramic image will provide a reliable, reproducible radiographic report, currently unavailable, which will assist the clinician and improve patients' confidence in the diagnosis.
Future Research: The algorithm for automatic detection and classification of dental restorations in panoramic imaging must be trained on a larger dataset to improve the results. It will then be used as a preliminary stage for automatically detecting incidental oral pathologies exhibited in panoramic images.
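The best-performing model named in the findings, weighted k-NN, is k-nearest-neighbours classification in which each neighbour's vote is weighted by inverse distance. A minimal pure-NumPy sketch is below; the feature dimensionality and the choice of k here are illustrative assumptions, not the study's settings (the study used 20 features and 9 restoration categories).

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x, k=5, eps=1e-12):
    """Classify feature vector x by a distance-weighted vote of its
    k nearest training samples (weight = 1 / distance)."""
    distances = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(distances)[:k]
    votes = {}
    for i in nearest:
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + 1.0 / (distances[i] + eps)
    # The class with the largest accumulated weight wins.
    return max(votes, key=votes.get)
```

Inverse-distance weighting lets very close neighbours dominate the vote, which typically helps when classes (e.g., filling vs. crown) form compact clusters in feature space.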
Objectives: To develop an automated deep-learning algorithm for detection and 3D segmentation of incidental bone lesions in maxillofacial CBCT scans.
Methods: The dataset included 82 cone-beam CT (CBCT) scans, 41 with histologically confirmed benign bone lesions and 41 control scans (with no lesion), obtained on three CBCT devices with diverse imaging protocols. Lesions were marked in all axial slices by experienced maxillofacial radiologists. All cases were divided into sub-datasets: training (20,214 axial images), validation (4,530 axial images), and testing (6,795 axial images). A Mask-RCNN algorithm segmented the bone lesions in each axial slice. Analysis of sequential slices was used to improve the Mask-RCNN performance and to classify each CBCT scan as containing bone lesions or not. Finally, the algorithm generated a 3D segmentation of the lesions and calculated their volumes.
Results: The algorithm correctly classified all CBCT cases as containing bone lesions or not, with an accuracy of 100%. It detected the bone lesion in each axial image with high sensitivity (95.9%) and high precision (98.9%), with an average Dice coefficient of 83.5%.
Conclusions: The developed algorithm detected and segmented bone lesions in CBCT scans with high accuracy and may serve as a computerized tool for detecting incidental bone lesions in CBCT imaging.
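The per-slice overlap metric reported above, an average Dice coefficient of 83.5%, measures agreement between the predicted and radiologist-marked masks. Its standard definition for binary masks can be sketched as follows (this is the conventional formula, not code from the study):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-12):
    """Dice overlap between two binary masks:
    2 * |pred AND truth| / (|pred| + |truth|).
    Returns 1.0 for identical non-empty masks, 0.0 for disjoint ones."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)
```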
Objectives: This study aimed to develop an automated deep-learning algorithm for the detection and 3D segmentation of incidental jaw lesions in maxillofacial CBCT scans.
Materials and Methods: The dataset included 82 CBCT scans with and without histologically confirmed benign bone lesions, obtained from three CBCT devices using different imaging protocols. The dataset consisted of axial CBCT images and was divided into a training dataset (20,214 axial images), a validation dataset (4,530 axial images), and a testing dataset (6,795 axial images). A Mask-RCNN-based deep-learning algorithm segmented the bone lesion in each axial image. The analysis of sequential slices improved the Mask-RCNN performance and assisted in classifying each CBCT case as containing bone lesions or not. Thereafter, the algorithm generated a 3D segmentation of the lesions.
Results: The accuracy of the algorithm in classifying each CBCT case as either containing bone lesions or not was 100%. The algorithm's performance in detecting lesions in individual axial images showed high sensitivity (95.9%) and high precision (98.9%).
Conclusions: Our deep-learning algorithm can detect incidental bone lesions in CBCT scans with high accuracy, high sensitivity, and high precision, and is highly recommended for bone lesion detection and follow-up in CBCT imaging.
Clinical relevance: The increasing number of CBCT scans performed worldwide dictates a clinical need for an automated tool that will review, detect, and volumetrically demonstrate incidental bone lesions in CBCT scans with high accuracy and a low false-positive rate.
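The final step described in both abstracts, building a 3D segmentation from per-slice 2D masks and measuring lesion volume, amounts to stacking the axial masks along the slice axis and multiplying the voxel count by the voxel volume. A minimal sketch, assuming an illustrative 0.3 mm isotropic voxel spacing (the abstracts do not report the actual spacing):

```python
import numpy as np

def lesion_volume_mm3(slice_masks, voxel_mm=(0.3, 0.3, 0.3)):
    """Stack per-slice binary masks into a 3D volume and return the
    lesion volume in cubic millimetres.

    slice_masks: sequence of 2D boolean arrays, one per axial slice.
    voxel_mm: (dz, dy, dx) voxel spacing; 0.3 mm isotropic is an
    assumed example value, not one reported in the study.
    """
    volume = np.stack(slice_masks, axis=0)   # shape: (slices, H, W)
    voxel_volume = voxel_mm[0] * voxel_mm[1] * voxel_mm[2]
    return int(volume.sum()) * voxel_volume
```

In practice the voxel spacing would be read from the scan's DICOM metadata rather than hard-coded.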