Purpose: Muscle, bone, and fat segmentation from thigh images is essential for quantifying body composition. Voxelwise image segmentation enables quantification of tissue properties including area, intensity, and texture. Deep learning approaches have had substantial success in medical image segmentation, but they typically require large amounts of labeled data. Because manual annotation is costly, training deep learning models with limited human-labeled data is desirable but challenging. Approach: Inspired by transfer learning, we propose a two-stage deep learning pipeline for thigh and lower leg segmentation. We studied three datasets: 3022 thigh slices and 8939 lower leg slices from the BLSA dataset, and 121 thigh slices from the GESTALT study. First, we generated pseudo labels for the thigh using approximate handcrafted approaches based on CT intensity and anatomical morphology. Those pseudo labels were then used to train deep neural networks from scratch. Finally, the first-stage model was loaded as the initialization and fine-tuned on a more limited set of expert human labels of the thigh. Results: We evaluated the performance of this framework on 73 thigh CT images and obtained an average Dice similarity coefficient (DSC) of 0.927 across muscle, internal bone, cortical bone, subcutaneous fat, and intermuscular fat. To test the generalizability of the proposed framework, we applied the model to lower leg images and obtained an average DSC of 0.823. Conclusions: Approximate handcrafted pseudo labels can provide a good initialization for deep neural networks, helping to reduce the need for, and make full use of, expert-labeled data.
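The two-stage training scheme in this abstract can be summarized in a short sketch. Below is a minimal PyTorch illustration, not the authors' implementation: TinySegNet, dummy_batches, and all hyperparameters are hypothetical stand-ins, assuming a generic 2D segmentation setup with six classes.

```python
# Hedged sketch of the two-stage pipeline: pretrain on abundant pseudo labels,
# then fine-tune the same weights on scarce expert labels. All names are
# illustrative stand-ins, not the paper's code.
import torch
import torch.nn as nn

NUM_CLASSES = 6  # background + muscle, internal bone, cortical bone, 2 fat classes

class TinySegNet(nn.Module):
    """Stand-in backbone; the paper's actual architecture is not assumed here."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, NUM_CLASSES, 1),
        )

    def forward(self, x):
        return self.body(x)

def train(model, batches, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for image, label in batches:
            opt.zero_grad()
            loss_fn(model(image), label).backward()
            opt.step()
    return model

def dummy_batches(n):  # stand-in for real CT slice loaders
    return [(torch.randn(4, 1, 64, 64),
             torch.randint(0, NUM_CLASSES, (4, 64, 64))) for _ in range(n)]

# Stage 1: train from scratch on abundant handcrafted pseudo labels.
model = train(TinySegNet(), dummy_batches(8), epochs=2, lr=1e-3)
torch.save(model.state_dict(), "stage1.pt")

# Stage 2: initialize from stage 1, then fine-tune on the small expert-labeled
# set, typically with a lower learning rate to preserve the learned features.
model.load_state_dict(torch.load("stage1.pt"))
model = train(model, dummy_batches(2), epochs=2, lr=1e-4)
```

The key design point is that stage 2 reuses the stage-1 weights rather than random initialization, so the scarce expert labels only need to correct the pseudo-label approximation rather than teach segmentation from scratch.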
Accurate and reproducible tissue identification is essential for understanding structural and functional changes that may occur naturally with aging, because of chronic disease, or in response to intervention therapies. Peripheral quantitative computed tomography (pQCT) is regularly employed for body composition studies, especially of the structural and material properties of bone. Furthermore, pQCT acquisition requires a low radiation dose, and the scanner is compact and portable. However, pQCT scans have limited spatial resolution and moderate signal-to-noise ratio (SNR), and image quality is frequently degraded by involuntary subject movement during acquisition. These limitations can compromise the accuracy of tissue quantification and underscore the need for automated, robust quantification methods. We propose a tissue identification and quantification methodology that addresses image quality limitations and artifacts, with particular attention to subject movement. We introduce a multi-atlas image segmentation (MAIS) framework for semantic segmentation of hard and soft tissues in pQCT scans at multiple levels of the lower leg, and describe its stages: statistical atlas generation, deformable registration, and multi-tissue classifier fusion. We evaluated the performance of our methodology using multiple deformable registration approaches against reference tissue masks, and evaluated conventional model-based segmentation against the same reference data to facilitate comparison. We also studied the effect of subject movement on tissue segmentation quality, applied the top-performing method to a larger out-of-sample dataset, and report the quantification results. The results show that multi-atlas image segmentation with diffeomorphic deformation and probabilistic label fusion produces very good segmentation quality across all tissues, even for scans with significant quality degradation. Applying our technique to the larger dataset reveals trends in age-related body composition change that are consistent with the literature. Because of its robustness to subject motion artifacts, our MAIS methodology enables analysis of a larger number of scans than conventional state-of-the-art methods. Automated analysis of both soft and hard tissues in pQCT is a further contribution of this work.
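As a rough illustration of the register-warp-fuse loop at the core of multi-atlas segmentation, here is a minimal SimpleITK sketch. It is an assumption-laden stand-in, not the study's pipeline: the diffeomorphic demons filter and MultiLabelSTAPLE fusion substitute for whichever specific registration and probabilistic fusion methods were compared, and register_and_warp / multi_atlas_segment are hypothetical names.

```python
# Hedged MAIS sketch: each atlas image is deformably registered to the target,
# its label map is warped with the resulting transform, and the warped label
# maps are fused into a consensus segmentation.
import SimpleITK as sitk

def register_and_warp(target, atlas_img, atlas_lbl, iterations=50):
    """Diffeomorphic demons registration, then nearest-neighbor label warping."""
    demons = sitk.DiffeomorphicDemonsRegistrationFilter()
    demons.SetNumberOfIterations(iterations)
    demons.SetStandardDeviations(1.0)  # Gaussian smoothing of the field
    field = demons.Execute(sitk.Cast(target, sitk.sitkFloat32),
                           sitk.Cast(atlas_img, sitk.sitkFloat32))
    tx = sitk.DisplacementFieldTransform(field)
    # Nearest-neighbor interpolation keeps the warped labels integer-valued.
    return sitk.Resample(atlas_lbl, target, tx, sitk.sitkNearestNeighbor, 0)

def multi_atlas_segment(target, atlases):
    """atlases: list of (intensity_image, label_image) pairs."""
    warped = [register_and_warp(target, img, lbl) for img, lbl in atlases]
    # Probabilistic consensus over the warped candidate segmentations;
    # simple majority voting (sitk.LabelVoting) would be a cheaper alternative.
    return sitk.MultiLabelSTAPLE(warped)
```

The fusion step is what buys robustness to motion artifacts: a registration that fails on a corrupted region is outvoted by the atlases that succeeded there.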
Diagnosis of breast cancer is often achieved through expert radiologist examination of medical images such as mammograms. Computer-aided diagnosis (CADx) methods can be useful tools in the medical field, for example by aiding radiologists in making diagnostic decisions. However, such CADx systems require a sufficient amount of training data in conjunction with efficient machine learning techniques. Our Spatially Localized Ensembles Sparse Analysis using Deep Features (DF-SLESA) machine learning model uses local information from features extracted by deep neural networks to learn and classify breast imaging patterns based on sparse approximations. We have also developed a new patch sampling technique for learning sparse approximations and making classification decisions, which we denote PatchSample decomposition. PatchSample differs from our previous BlockBoost method in that it constructs larger dictionaries that hold not just location-specific information but a collective of visual information from all locations in the region of interest (ROI). Of note, we trained and tested our method on a merged dataset of mammograms obtained from two sources. Experimental results reached up to 67.80% classification accuracy (ACC) and 73.21% area under the ROC curve (AUC) using PatchSample decomposition on a merged dataset consisting of MLO-view regions of interest from the MIAS and CBIS-DDSM datasets.
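To make the sparse-analysis step concrete, below is a hedged sketch of residual-based classification over patch features, in the spirit of a PatchSample-style decomposition where each class dictionary pools patches from all ROI locations. The dictionary construction, feature dimensions, and function names are illustrative assumptions, and the random arrays stand in for deep features that a real system would extract from ROI patches.

```python
# Hedged sketch: one dictionary per class, built from patch features pooled
# across all ROI locations; a test ROI is assigned the class whose dictionary
# best reconstructs its patches via sparse (OMP) approximation.
import numpy as np
from sklearn.decomposition import SparseCoder

def class_residual(patch_feats, dictionary, n_nonzero=5):
    """Mean OMP reconstruction error of the patches under one dictionary."""
    coder = SparseCoder(dictionary=dictionary, transform_algorithm="omp",
                        transform_n_nonzero_coefs=n_nonzero)
    codes = coder.transform(patch_feats)   # (n_patches, n_atoms)
    recon = codes @ dictionary             # (n_patches, n_features)
    return np.mean(np.linalg.norm(patch_feats - recon, axis=1))

def classify_roi(patch_feats, dictionaries):
    """dictionaries: {label: (n_atoms, n_features) array of pooled atoms}."""
    residuals = {c: class_residual(patch_feats, D)
                 for c, D in dictionaries.items()}
    return min(residuals, key=residuals.get)

def unit_rows(a):
    """OMP behaves best with unit-norm dictionary atoms."""
    return a / np.linalg.norm(a, axis=1, keepdims=True)

# Toy usage with random stand-ins for deep features of ROI patches.
rng = np.random.default_rng(0)
dicts = {"benign": unit_rows(rng.standard_normal((64, 128))),
         "malignant": unit_rows(rng.standard_normal((64, 128)))}
test_patches = rng.standard_normal((32, 128))
print(classify_roi(test_patches, dicts))
```

Pooling atoms from every ROI location, as described for PatchSample, enlarges each class dictionary so a test patch can be explained by visually similar patches regardless of where they occurred in training ROIs.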