Background Adolescent idiopathic scoliosis (AIS) is a three-dimensional spinal deformity that predominantly occurs in girls. While skeletal growth and maturation influence the development of AIS, accurate prediction of curve progression remains difficult because the prognosis differs among individuals. The purpose of this study is to develop a new diagnostic platform using a deep convolutional neural network (DCNN) that can predict the risk of curve progression in patients with AIS. Methods Fifty-eight patients with AIS (49 females and 9 males; mean age: 12.5 ± 1.4 years) and a Cobb angle between 10 and 25 degrees (mean angle: 18.7 ± 4.5 degrees) were divided into two groups: those whose Cobb angle increased by more than 10 degrees within two years (progression group, 28 patients) and those whose Cobb angle changed by less than 5 degrees (non-progression group, 30 patients). X-ray images of three regions of interest (ROIs) (lung [ROI1], abdomen [ROI2], and total spine [ROI3]) were used as the source data for learning and prediction. Five spine surgeons also predicted the progression of scoliosis by reading the X-rays in a blinded manner. Results The DCNN predicted AIS curve progression with an accuracy of 69% and an area under the receiver-operating characteristic curve of 0.70 using ROI3 images, whereas the diagnostic accuracy of the spine surgeons was inferior at 47%. Transfer learning with a pretrained DCNN contributed to improved prediction accuracy. Conclusion The proposed DCNN-based method for predicting the risk of scoliosis progression in AIS could be a valuable tool in decision-making for therapeutic interventions.
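The reported 0.70 is the area under the receiver-operating characteristic (ROC) curve computed from the network's progression scores against the progression/non-progression labels. A minimal sketch of how such an AUC can be computed, using the rank-sum (Mann-Whitney U) formulation; the scores and labels below are hypothetical stand-ins, not the study's data:

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U formulation:
    the probability that a randomly chosen positive case receives a
    higher score than a randomly chosen negative case (ties count 1/2)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # Pairwise comparisons between every positive and every negative score.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical DCNN progression scores for 6 patients (1 = progression group).
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.3, 0.7, 0.2, 0.1]
print(roc_auc(labels, scores))
```

This pairwise formulation is equivalent to integrating the ROC curve and is convenient for small cohorts like the 58 patients here.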
BACKGROUND: Imaging examinations are crucial for diagnosing acute ischemic stroke, and knowledge of a patient’s body weight is necessary for safe examination. To perform examinations safely and rapidly, estimating body weight from head computed tomography (CT) scout images can be useful. OBJECTIVE: This study aims to develop a new method for estimating body weight from head CT scout images for contrast-enhanced CT examinations in patients with acute ischemic stroke. METHODS: This study investigates three weight estimation techniques. The first uses the total pixel values of the head CT scout images. The second employs the Xception model, trained on 216 images with leave-one-out cross-validation. The third averages the first two estimates. Our primary focus is this third, combined method. RESULTS: The third method, the average of the first two estimates, demonstrates moderate accuracy, with a 95% confidence interval of ±14.7 kg. The first method, using only total pixel values, has a wider interval of ±20.6 kg, while the second method, a deep learning approach, yields a 95% interval of ±16.3 kg. CONCLUSIONS: The presented method is a potentially valuable support tool for medical staff, such as doctors and nurses, for estimating weight during emergency examinations of patients with acute conditions such as stroke, when obtaining accurate weight measurements is not easily feasible.
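The third method is a simple average of the pixel-value-based and Xception-based estimates, and the ±14.7 kg figure is a 95% interval on the estimation error. A minimal sketch, under the assumption that the interval half-width is 1.96 standard deviations of the errors; the weights below are made-up illustrative values, not the study's data:

```python
import numpy as np

def combined_estimate(w_pixel, w_xception):
    """Third method: average the pixel-value-based and Xception-based estimates."""
    return (np.asarray(w_pixel, dtype=float) + np.asarray(w_xception, dtype=float)) / 2.0

def interval_95(true_w, est_w):
    """Half-width of a 95% error interval, assumed here to be
    1.96 sample standard deviations of the estimation errors."""
    errors = np.asarray(est_w, dtype=float) - np.asarray(true_w, dtype=float)
    return 1.96 * errors.std(ddof=1)

# Made-up example: true weights and the two per-method estimates (kg).
true_w  = [55.0, 70.0, 82.0, 64.0]
w_pixel = [60.0, 66.0, 90.0, 58.0]
w_xcept = [52.0, 73.0, 80.0, 67.0]

w_avg = combined_estimate(w_pixel, w_xcept)
print(w_avg)                       # averaged per-patient estimates
print(interval_95(true_w, w_avg))  # half-width of the 95% interval (kg)
```

Averaging two weakly correlated estimators tends to cancel part of each one's error, which is consistent with the combined interval (±14.7 kg) being narrower than either single method's.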
BACKGROUND: Head computed tomography (CT) is a commonly used imaging modality in radiology facilities. Since multiplanar reconstruction (MPR) processing can produce different results depending on the medical staff in charge, antemortem and postmortem images of the same person could be assessed and identified differently. OBJECTIVE: To propose and test a new automatic MPR method that addresses and overcomes this limitation. METHODS: Head CT images of 108 cases are used. We employ the standardization transformation of Statistical Parametric Mapping 8 (SPM8). The affine transformation parameters are obtained by standardizing the captured CT images, and automatic MPR processing is performed using these parameters. The sphenoidal sinus region of the orbitomeatal cross-section is cropped, with a matrix size of 128×128, from both the proposed automatic MPR output and the conventional manual MPR output, and the zero-mean normalized cross-correlation coefficient between the two crops is calculated. RESULTS: A zero-mean normalized cross-correlation coefficient (Rzncc) of Rzncc ≥ 0.9, 0.8 ≤ Rzncc < 0.9, and 0.7 ≤ Rzncc < 0.8 is achieved in 105 cases (97.2%), 2 cases (1.9%), and 1 case (0.9%), respectively. The average Rzncc is 0.96 ± 0.03. CONCLUSION: Using the proposed method, MPR processing with guaranteed accuracy is achieved efficiently.
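The zero-mean normalized cross-correlation coefficient compares the 128×128 sphenoidal-sinus crops from the automatic and manual MPR results: subtract each crop's mean, then take the normalized dot product, giving 1.0 for identical crops. A minimal sketch of the coefficient itself, with random arrays standing in for the CT crops:

```python
import numpy as np

def rzncc(a, b):
    """Zero-mean normalized cross-correlation coefficient of two
    equally sized image crops. Identical crops give 1.0."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

# Random 128x128 stand-ins for the cropped sphenoidal-sinus regions.
rng = np.random.default_rng(0)
crop_auto = rng.random((128, 128))
crop_manual = crop_auto + 0.05 * rng.random((128, 128))  # slightly perturbed copy

print(rzncc(crop_auto, crop_auto))    # identical crops -> 1.0
print(rzncc(crop_auto, crop_manual))  # high, but below 1.0
```

Because both the mean offset and the overall scale are normalized away, the coefficient is insensitive to global brightness and contrast differences between the two reconstructions, which is what makes it a reasonable agreement measure here.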
This study proposes a deep convolutional neural network (DCNN) classification method for the quality control and validation of breast positioning criteria in mammography. A total of 1631 mediolateral oblique mammographic views were collected from an open database. We designed two main steps for mammographic verification: automated detection of the positioning part and classification into three scales that determine positioning quality using DCNNs. After the mammograms were labeled with the three scales by visual evaluation based on the guidelines, the first step automatically detected the region of interest of the subject part by image processing. The next step classified mammographic positioning accuracy into the three scales using four representative DCNNs. The experimental results showed that the DCNN models achieved the best positioning classification accuracy of 0.7597 using VGG16 for the inframammary fold and an accuracy of 0.6996 using Inception-v3 for the nipple profile. Furthermore, using the softmax function, the breast positioning criteria could be evaluated quantitatively by presenting the predicted value, which is the probability of the determined positioning accuracy. The proposed method can be evaluated quantitatively without the need for individual qualitative evaluation and has the potential to improve the quality control and validation of breast positioning criteria in mammography.
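Quantitative evaluation via the softmax function means the network's three raw outputs (one per positioning-quality scale) are converted into probabilities that sum to one, so the predicted value can be read as the confidence of the assigned scale. A minimal sketch with hypothetical logits:

```python
import numpy as np

def softmax(logits):
    """Convert raw network outputs (logits) into class probabilities.
    Subtracting the maximum first keeps the exponentials numerically stable."""
    z = np.asarray(logits, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical logits for the three positioning-quality scales.
probs = softmax([2.0, 0.5, -1.0])
print(probs)           # probability for each of the three scales
print(probs.sum())     # always 1.0
print(probs.argmax())  # index of the predicted positioning scale (0 here)
```

Reporting the softmax probability alongside the predicted scale is what allows a graded, quantitative quality score rather than a bare three-way label.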