This paper reports a study on three-dimensional (3D) deep-learning-based automatic diagnosis of nasal fractures. (1) Background: The nasal bone is the most protuberant feature of the face; it is therefore highly vulnerable to facial trauma, and its fractures are known as the most common facial fractures worldwide. In addition, the fractured bone adheres quickly and deforms rapidly, so a clear diagnosis is needed soon after fracture onset. (2) Methods: The collected computed tomography images were reconstructed into isotropic voxel data covering the whole region of the nasal bone, represented in a fixed cubic volume. The configured 3D input data were then automatically classified by deep residual neural networks (3D-ResNet34 and ResNet50) using spatial context information in a single network, whose performance was evaluated by 5-fold cross-validation. (3) Results: The classification of nasal fractures with simple 3D-ResNet34 and ResNet50 networks achieved areas under the receiver operating characteristic curve of 94.5% and 93.4% for binary classification, respectively, both indicating unprecedentedly high performance on the task. (4) Conclusions: This paper presents the possibility of automatic nasal bone fracture diagnosis using a 3D ResNet-based single classification network, and future research is expected to further improve the diagnostic environment.
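As an illustration of the classification setup described above, the following is a minimal sketch of a 3D residual network that maps a fixed-size cubic CT volume to a binary fracture/no-fracture prediction. It is not the authors' exact 3D-ResNet34/50: the class names, layer counts, channel widths, and 64³ input size are illustrative assumptions for the example.

```python
# Minimal sketch of a 3D residual classifier for fixed-size cubic CT volumes.
# NOT the paper's exact 3D-ResNet34/50; names and layer sizes are assumptions.
import torch
import torch.nn as nn

class BasicBlock3D(nn.Module):
    """Two 3x3x3 convolutions with a skip connection (a 3D residual block)."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(out_ch)
        self.conv2 = nn.Conv3d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        self.down = None
        if stride != 1 or in_ch != out_ch:
            self.down = nn.Sequential(
                nn.Conv3d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm3d(out_ch),
            )

    def forward(self, x):
        identity = x if self.down is None else self.down(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)

class NasalFractureNet3D(nn.Module):
    """Shallow 3D residual network ending in a binary (fracture / no fracture) head."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv3d(1, 32, 7, stride=2, padding=3, bias=False),
            nn.BatchNorm3d(32), nn.ReLU(inplace=True),
        )
        self.layers = nn.Sequential(
            BasicBlock3D(32, 32),
            BasicBlock3D(32, 64, stride=2),
            BasicBlock3D(64, 128, stride=2),
        )
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, x):  # x: (batch, 1, D, H, W) isotropic cube
        x = self.layers(self.stem(x))
        return self.fc(torch.flatten(self.pool(x), 1))

# Quick shape check on a dummy 64^3 volume (the paper's cube size is not assumed here).
logits = NasalFractureNet3D()(torch.randn(2, 1, 64, 64, 64))
print(logits.shape)  # torch.Size([2, 2])
```

In a 5-fold cross-validation setup such as the one reported, this model would be re-initialized and retrained once per fold, with each fold held out in turn for evaluation.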
Background and Aim: Endoscopic ultrasound (EUS) is the most accurate diagnostic modality for polypoid lesions of the gallbladder (GB) but is limited by subjective interpretation. Deep learning-based artificial intelligence (AI) algorithms are under development. We evaluated the diagnostic performance of AI in differentiating polypoid lesions using EUS images. Methods: The diagnostic performance of the EUS-AI system with ResNet50 architecture was evaluated via three processes: training, internal validation, and testing, using an AI development cohort of 1039 EUS images (836 GB polyps and 203 gallstones). The diagnostic performance was verified using an external validation cohort of 83 patients and compared with the performance of EUS endoscopists. Results: In the AI development cohort, we developed an EUS-AI algorithm and evaluated its diagnostic performance, including sensitivity, specificity, positive predictive value, negative predictive value, and accuracy. For the differential diagnosis of neoplastic and non-neoplastic GB polyps, these values for the EUS-AI were 57.9%, 96.5%, 77.8%, 91.6%, and 89.8%, respectively. In the external validation cohort, we compared the diagnostic performance of the EUS-AI and the endoscopists. For the differential diagnosis of neoplastic and non-neoplastic GB polyps, the sensitivity and specificity were 33.3% and 96.1% for the EUS-AI; they were 74.2% and 44.9%, respectively, for the endoscopists. In addition, the accuracy of the EUS-AI was between the accuracies of mid-level (66.7%) and expert EUS endoscopists (77.5%). Conclusions: This newly developed EUS-AI system showed favorable performance for the diagnosis of neoplastic GB polyps, with a performance comparable to that of EUS endoscopists. Author contributions: Y. J. S. is responsible for technical and material support and analysis and interpretation of the data. D. K. L. is responsible for analysis and interpretation of the data and critical revision of the article for important intellectual content. K. G. K. is responsible for the conception and design, case collection, critical revision of the article for important intellectual content, and final approval of the article. J. H. C. is responsible for the conception and design, case collection, critical revision of the article for important intellectual content, and final approval of the article.
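For context, the sketch below shows one common way to set up a ResNet50 binary classifier for EUS images in PyTorch. The abstract only states that a ResNet50 architecture was used, so the ImageNet-pretrained weights, two-class head, input size, optimizer, and learning rate here are assumptions, not the authors' training recipe.

```python
# Minimal sketch of a ResNet50-based binary classifier for EUS images.
# Pretrained weights and hyperparameters are assumptions; the paper only
# states that a ResNet50 architecture was used.
import torch
import torch.nn as nn
from torchvision import models

# Downloads ImageNet weights (an assumed starting point, not the paper's).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # neoplastic vs. non-neoplastic polyp

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 RGB EUS frames.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 1])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```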
This paper proposes an automatic rib sequence labeling system for chest computed tomography (CT) images, with two suggested methods and three-dimensional (3D) region growing. In clinical practice, radiologists usually define anatomical locations by rib number. Thus, they must manually label the 12 pairs of ribs, count their sequence, and refer to these annotations every time they read a chest CT. This process is tedious, repetitive, and time-consuming, and the demand for chest CT-based readings has increased. To handle the task efficiently, we proposed an automatic rib sequence labeling system and compared two methods. With 50 collected chest CT images, we implemented intensity-based image processing (IIP) and a convolutional neural network (CNN) for rib segmentation in this system. Additionally, 3D region growing was used to identify each rib and assign its sequence label. The IIP-based method achieved a 92.0% success rate and the CNN-based method a 98.0% success rate, where success is defined as labeling the appropriate rib sequence over all pairs (1st to 12th) for all slices. We hope this efficient automatic rib sequence labeling system will be applicable in clinical diagnostic environments.
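The following is a minimal sketch of how 3D region growing, implemented here as 26-connected component labeling of a binary rib segmentation mask, could assign a sequence label to each rib. The left/right split, the superior-to-inferior ordering rule, the function names, and the toy data are illustrative assumptions rather than the paper's exact procedure.

```python
# Minimal sketch of rib sequence labeling via 3D connected-component analysis
# (a form of region growing) on a binary rib segmentation mask.
# Names and ordering rules are illustrative assumptions.
import numpy as np
from scipy import ndimage

def label_rib_sequence(rib_mask: np.ndarray) -> dict:
    """rib_mask: binary 3D array (z, y, x) with z increasing inferiorly."""
    # Grow each rib as a 26-connected 3D component.
    structure = np.ones((3, 3, 3), dtype=int)
    components, n = ndimage.label(rib_mask, structure=structure)

    # Order components from superior to inferior by centroid z, split left/right
    # by centroid x relative to the volume midline, and number them 1..12.
    centroids = ndimage.center_of_mass(rib_mask, components, range(1, n + 1))
    midline_x = rib_mask.shape[2] / 2.0
    left, right = [], []
    for comp_id, (cz, cy, cx) in zip(range(1, n + 1), centroids):
        (left if cx < midline_x else right).append((cz, comp_id))

    labels = {}
    for side, comps in (("L", sorted(left)), ("R", sorted(right))):
        for rib_number, (_, comp_id) in enumerate(comps, start=1):
            labels[comp_id] = f"{side}{rib_number}"  # e.g. "L1" ... "R12"
    return labels

# Usage on a toy mask: two small blobs stand in for two ribs.
toy = np.zeros((10, 10, 10), dtype=np.uint8)
toy[2, 4:6, 1:3] = 1   # superior, left of midline  -> "L1"
toy[7, 4:6, 7:9] = 1   # inferior, right of midline -> "R1"
print(label_rib_sequence(toy))  # {1: 'L1', 2: 'R1'}
```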
Endoscopic ultrasound (EUS) is the most accurate diagnostic modality for polypoid lesions of the gallbladder (GB) but is limited by subjective interpretation. We evaluated the diagnostic performance of deep learning-based artificial intelligence (AI) in differentiating polypoid lesions using EUS images. The diagnostic performance of the EUS-AI system with ResNet50 architecture was evaluated via three processes: training, internal validation, and testing. The diagnostic performance was also verified using an external validation cohort and compared with the performance of EUS endoscopists. In the AI development cohort, the diagnostic performance of the EUS-AI was evaluated in terms of sensitivity, specificity, positive predictive value, negative predictive value, and accuracy. For the differential diagnosis of neoplastic and non-neoplastic GB polyps, these values for the EUS-AI were 77.8%, 91.6%, 57.9%, 96.5%, and 89.8%, respectively. In the external validation cohort, for the differential diagnosis of neoplastic and non-neoplastic GB polyps, these values were 60.3%, 77.4%, 36.2%, 90.2%, and 74.4%, respectively, for the EUS-AI; they were 74.2%, 44.9%, 75.4%, 46.2%, and 65.3%, respectively, for the endoscopists. The accuracy of the EUS-AI was between the accuracies of mid-level (66.7%) and expert EUS endoscopists (77.5%). This EUS-AI system showed favorable performance for the diagnosis of neoplastic GB polyps, with a performance comparable to that of EUS endoscopists.
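For reference, the sketch below computes the five reported metrics (sensitivity, specificity, positive predictive value, negative predictive value, and accuracy) from a binary confusion matrix. The labels and counts in the example are dummy values, not the study's data.

```python
# Minimal sketch of the reported diagnostic metrics computed from binary
# predictions; the example counts are dummy values, not the study's data.
import numpy as np
from sklearn.metrics import confusion_matrix

def diagnostic_metrics(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "ppv":         tp / (tp + fp),          # positive predictive value
        "npv":         tn / (tn + fn),          # negative predictive value
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
    }

# Dummy example: 1 = neoplastic polyp, 0 = non-neoplastic polyp.
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 0])
print(diagnostic_metrics(y_true, y_pred))
```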