Background: We aimed to develop and evaluate a deep learning-based method for fully automatic segmentation of knee joint MR images and quantitative computation of knee osteoarthritis (OA)-related imaging biomarkers.

Material/Methods: This retrospective study included 843 volumes of proton density-weighted, fat-suppressed MR imaging. A convolutional neural network (CNN) segmentation method with a multiclass gradient harmonized Dice loss was trained on 500 volumes and evaluated on 137. To assess potential morphologic biomarkers for OA, the volume and thickness of cartilage and meniscus and the minimal joint space width (mJSW) were automatically computed and compared between 128 OA and 162 control cases.

Results: The CNN segmentation model produced high Dice coefficients, ranging from 0.948 to 0.974 for the knee bone compartments, 0.717 to 0.809 for cartilage, and 0.846 for both the lateral and medial menisci. The OA-related biomarkers computed from automatic segmentation correlated strongly with those from manual segmentation, with average intraclass correlation coefficients of 0.916, 0.899, and 0.876 for cartilage volume and thickness, meniscus volume and thickness, and mJSW, respectively. Cartilage volume and thickness measurements and mJSW were strongly correlated with knee OA progression.

Conclusions: We present a fully automatic CNN-based knee segmentation system for fast and accurate evaluation of knee joint images; OA-related biomarkers such as cartilage thickness and mJSW were reliably computed and visualized in 3D. The results show that the CNN model can serve as an assistive tool for radiologists and orthopedic surgeons in clinical practice and basic research.
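The Dice coefficients reported above measure voxel-wise overlap between a predicted mask and a reference mask. A minimal sketch of the metric (not the study's evaluation code; the function name and toy masks are illustrative):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2*|A∩B| / (|A| + |B|), with eps guarding against empty masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 1-D "masks": overlap of 2 voxels, 3 foreground voxels each -> 2*2/6
a = [1, 1, 1, 0, 0]
b = [0, 1, 1, 1, 0]
print(round(dice_coefficient(a, b), 3))  # → 0.667
```

The same formula extends unchanged to 3-D volumes, since the sums run over all voxels regardless of array shape.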
Introduction: Medical image analysis is of tremendous importance in clinical diagnosis, treatment planning, and prognosis assessment. However, the analysis process usually involves multiple modality-specific software tools and relies on rigorous manual operations, making it time-consuming and potentially poorly reproducible.

Methods: We present an integrated platform, the uAI Research Portal (uRP), to achieve one-stop analysis of multimodal images such as CT, MRI, and PET for clinical research applications. The proposed uRP adopts a modularized architecture designed to be multifunctional, extensible, and customizable.

Results and Discussion: The uRP offers three advantages: 1) it spans a wealth of image-processing algorithms, including semi-automatic delineation, automatic segmentation, registration, classification, quantitative analysis, and image visualization, realizing a one-stop analytic pipeline; 2) it integrates a variety of functional modules that can be directly applied, combined, or customized for specific application domains, such as brain, pneumonia, and knee joint analyses; 3) it enables full-stack analysis of a single disease, including diagnosis, treatment planning, and prognosis assessment, as well as full-spectrum coverage of multiple disease applications. With the continuous development and inclusion of advanced algorithms, we expect this platform to substantially simplify the clinical scientific research process and promote more and better discoveries.
The objective of this research was to explore the value of whole-thyroid CT-based radiomics in predicting benign (noncancerous) versus malignant thyroid nodules. The imaging and clinical data of 161 patients with pathologically confirmed thyroid nodules were retrospectively analyzed. The entire thyroid region of interest (ROI) was manually delineated for all 161 cases. After CT radiomic features were extracted, the patients were divided into a training group (128 cases) and a test group (33 cases) at a 4:1 ratio with stratified random sampling (fivefold cross-validation). All data were normalized by the maximum absolute value and screened using least absolute shrinkage and selection operator (LASSO) regression analysis and K-best selection. The predictive model was trained by logistic regression, and its effectiveness in differentiating between benign and malignant thyroid nodules was validated with a receiver operating characteristic (ROC) curve. After data grouping, feature screening, and training, the logistic regression model with maximum-absolute-value normalization was constructed. For the training group, the area under the ROC curve (AUC) was 0.944 (95% confidence interval: 0.941–0.977); the sensitivity and specificity were 89.7% and 86.7%, respectively; and the diagnostic accuracy was 87.6%. For the test group, the AUC was 0.942 (95% confidence interval: 0.881–0.999); the sensitivity and specificity were 89.4% and 86.8%, respectively; and the diagnostic accuracy was 87.6%. The CT radiomic model of the entire thyroid gland is highly efficient in differentiating between benign and malignant thyroid nodules.
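The normalize-screen-classify pipeline described above can be sketched with scikit-learn. This is a hedged illustration on synthetic data, not the study's code: the feature matrix is random, the LASSO screening step is omitted for brevity, and only the maximum-absolute-value scaling, K-best screening, and logistic regression stages are shown.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MaxAbsScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for extracted radiomic features: 161 cases x 50 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(161, 50))
y = rng.integers(0, 2, size=161)
X[:, 0] += 2.0 * y  # plant one informative feature so the demo model learns something

# 4:1 stratified split, mirroring the 128-case training / 33-case test grouping.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=33, stratify=y, random_state=0)

model = Pipeline([
    ("scale", MaxAbsScaler()),                  # maximum-absolute-value normalization
    ("select", SelectKBest(f_classif, k=10)),   # "K best" feature screening
    ("clf", LogisticRegression(max_iter=1000)),
]).fit(X_tr, y_tr)

# Validate discrimination with the area under the ROC curve.
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.3f}")
```

Wrapping the stages in a `Pipeline` ensures the scaler and selector are fit only on training data, avoiding leakage into the test-set AUC.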
To develop a deep learning-based model for detecting rib fractures on chest X-ray images and to evaluate its performance in a multicenter study, chest digital radiography (DR) images from 18,631 subjects were used for training, testing, and validation of the fracture detection model. We first built a pretrained model using contrastive learning on the training set with SimCLR, a simple framework for contrastive learning of visual representations. SimCLR was then used as the backbone of a fully convolutional one-stage (FCOS) object detection network that identifies rib fractures on chest X-ray images. The detection performance for four types of rib fractures was evaluated using the testing set: 127 images from Data-CZ and 109 images from Data-CH, annotated with the four fracture types. For Data-CZ, the sensitivities of the detection model with no pretraining, ImageNet pretraining, and DR pretraining were 0.465, 0.735, and 0.822, respectively, at an average of five false positives per scan in all cases. For the Data-CH test set, the corresponding sensitivities were 0.403, 0.655, and 0.748. Across the four fracture types, the detection model achieved the highest performance for displaced fractures, with sensitivities of 0.873 and 0.774 on the Data-CZ and Data-CH test sets, respectively, at five false positives per scan, followed by nondisplaced fractures, buckle fractures, and old fractures. A pretrained model can significantly improve deep learning-based rib fracture detection on X-ray images, which can reduce missed diagnoses and improve diagnostic efficacy.
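The sensitivities above are quoted at a fixed false-positive budget of five per scan. One common way to read such an operating point off a detector's ranked output is sketched below; the function name, matching convention, and toy numbers are illustrative assumptions, not the study's evaluation code.

```python
import numpy as np

def sensitivity_at_fp_rate(scores, is_tp, n_gt, n_scans, max_fp_per_scan=5.0):
    """Sensitivity at the most permissive confidence threshold whose
    average false positives per scan stay within max_fp_per_scan."""
    order = np.argsort(scores)[::-1]              # rank detections by confidence
    is_tp = np.asarray(is_tp, dtype=bool)[order]
    tp_cum = np.cumsum(is_tp)                     # true positives kept at each cut
    fp_cum = np.cumsum(~is_tp)                    # false positives kept at each cut
    ok = fp_cum / n_scans <= max_fp_per_scan      # thresholds within the FP budget
    return float(tp_cum[ok][-1]) / n_gt if ok.any() else 0.0

# Toy example: 4 detections on 1 scan, 3 ground-truth fractures, budget of 1 FP/scan.
sens = sensitivity_at_fp_rate(
    scores=[0.9, 0.8, 0.7, 0.6], is_tp=[True, False, True, False],
    n_gt=3, n_scans=1, max_fp_per_scan=1.0)
print(round(sens, 3))  # → 0.667
```

Sweeping `max_fp_per_scan` over several budgets and plotting sensitivity against it yields the FROC curve typically used for lesion-detection benchmarks.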