Quantitative analysis of the dynamic properties of thoraco-abdominal organs such as the lungs during respiration could lead to more accurate surgical planning for disorders such as Thoracic Insufficiency Syndrome (TIS). This analysis can be performed on semi-automatic delineations of these organs in scans of the thoraco-abdominal body region. Dynamic magnetic resonance imaging (dMRI) is a practical and preferred imaging modality for this application, although automatic segmentation of the organs in these images is very challenging. In this paper, we describe an auto-segmentation system we built and evaluated based on dMRI acquisitions from 95 healthy subjects. Across the three recognition approaches, the system achieves a best average location error (LE) of about 1 voxel for the lungs, with a standard deviation (SD) of about 1-2 voxels. For the delineation approach, the average Dice coefficient (DC) for the lungs is about 0.95, with an SD of about 0.01-0.02. The system copes well with the challenges posed by low resolution, motion blur, inadequate contrast, and image intensity non-standardness. We are in the process of testing its effectiveness on dMRI data from TIS patients and on other thoraco-abdominal organs including the liver, kidneys, and spleen.
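As a point of reference for the evaluation metrics quoted above, the sketch below computes a Dice coefficient and a centroid-based location error from binary masks with NumPy. The abstract does not spell out the exact LE definition, so the distance between object centroids in voxels is an assumption here, and the toy masks and function names are illustrative only.

```python
import numpy as np

def dice_coefficient(seg: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient (DC) between a binary segmentation and ground truth."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    intersection = np.logical_and(seg, gt).sum()
    return 2.0 * intersection / (seg.sum() + gt.sum())

def location_error(seg: np.ndarray, gt: np.ndarray) -> float:
    """Location error (LE), assumed here to be the distance in voxels
    between the centroids of the recognized and the true object."""
    c_seg = np.array(np.nonzero(seg)).mean(axis=1)
    c_gt = np.array(np.nonzero(gt)).mean(axis=1)
    return float(np.linalg.norm(c_seg - c_gt))

# Toy example: two 3D boxes offset by one voxel along the first axis.
gt = np.zeros((32, 32, 32), dtype=bool)
gt[8:24, 8:24, 8:24] = True
seg = np.zeros_like(gt)
seg[9:25, 8:24, 8:24] = True

print(f"DC = {dice_coefficient(seg, gt):.3f}, LE = {location_error(seg, gt):.2f} voxels")
```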
Recently, deep learning networks have achieved considerable success in segmenting organs in medical images. Several methods have used volumetric information with deep networks to improve segmentation accuracy. However, for very challenging objects such as the brachial plexuses, these networks suffer from interference from artifacts, a risk of overfitting, and low accuracy. In this paper, to address these issues, we synergize the strengths of high-level human knowledge (i.e., Natural Intelligence (NI)) with deep learning (i.e., Artificial Intelligence (AI)) for recognition and delineation of the thoracic Brachial Plexuses (BPs) in Computed Tomography (CT) images. We formulate an anatomy-guided deep learning hybrid intelligence approach, consisting of two key stages, for segmenting the thoracic right and left brachial plexuses. In the first stage (AAR-R), objects are recognized based on a previously created fuzzy anatomy model of the body region with its key organs relevant to the task at hand, wherein high-level human anatomic knowledge is precisely codified. The second stage (DL-D) uses the information from AAR-R to limit the search region to just where each object is most likely to reside and performs encoder-decoder delineation in slices. The proposed method is tested on a dataset of 125 CT images acquired for radiation therapy planning of thoracic tumors and achieves a Dice coefficient of 0.659.
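To make the two-stage flow concrete, here is a minimal control-flow sketch assuming a hypothetical interface in which the recognition stage yields a bounding box and the delineation stage operates slice by slice inside it. The names `aar_recognize`, `delineate_slice`, and `hybrid_segment` are invented stand-ins, and a trivial intensity threshold replaces the trained encoder-decoder; the published AAR-R and DL-D components are far more involved.

```python
import numpy as np

def aar_recognize(image: np.ndarray, organ: str) -> tuple[slice, slice, slice]:
    """Stand-in for the AAR-R stage: in the actual method, a fuzzy anatomy
    model places the organ; here we return a fixed hypothetical box."""
    boxes = {"BP_left": (slice(20, 60), slice(30, 90), slice(40, 100))}
    return boxes[organ]

def delineate_slice(slice_2d: np.ndarray) -> np.ndarray:
    """Stand-in for the DL-D encoder-decoder, which would output a
    per-pixel mask; here, a trivial threshold for illustration."""
    return slice_2d > slice_2d.mean()

def hybrid_segment(image: np.ndarray, organ: str) -> np.ndarray:
    """Recognize first, then delineate only inside the recognized region,
    one axial slice at a time."""
    mask = np.zeros(image.shape, dtype=bool)
    box = aar_recognize(image, organ)
    roi = image[box]
    roi_mask = np.stack([delineate_slice(s) for s in roi], axis=0)
    mask[box] = roi_mask
    return mask

image = np.random.rand(80, 120, 140)
bp_mask = hybrid_segment(image, "BP_left")
print("voxels labeled:", int(bp_mask.sum()))
```

The point the sketch preserves is the division of labor: anatomic knowledge constrains where the network looks, so the encoder-decoder never has to reject implausible locations on its own.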
Auto-segmentation of medical images is critical for boosting precision radiology and radiation oncology efficiency, thereby improving quality of care for both health care practitioners and patients. An appropriate metric for evaluating auto-segmentation results is one of the essential tools for building an effective, robust, and practical auto-segmentation technique. However, the currently widely used metrics, which compare the predicted segmentation with the ground truth, usually focus on the overlapping area (Dice Coefficient) or the most severe shift of the boundary (Hausdorff Distance), which seems inconsistent with how human readers behave. Human readers usually verify and correct auto-segmentation contours and then apply the modified segmentation masks to guide clinical use in diagnosis or treatment. A metric called the Mendability Index (MI) is proposed to better estimate the effort required to manually edit the auto-segmentations of objects of interest in medical images until the segmentations become acceptable for the application at hand. Reflecting the different human behaviors elicited by different errors, MI classifies auto-segmentation errors into three types with distinct quantitative behaviors. The fluctuation inherent in human subjective delineation is also accounted for in MI. A total of 505 3D computed tomography (CT) auto-segmentations of 6 objects from 3 institutions, together with the corresponding ground truth and the manual mending time recorded from experts, are used to validate the performance of the proposed MI. The correlation between editing time and the segmentation metrics demonstrates that MI is generally more suitable for indicating mending effort than the Dice Coefficient or the Hausdorff Distance, suggesting that MI may be an effective metric for quantifying the clinical value of auto-segmentations.
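The abstract does not define MI's three error types, so the sketch below uses plausible stand-ins for an editing-oriented error decomposition: spurious predicted components (which an editor would delete), entirely missed ground-truth components (which an editor would draw from scratch), and residual boundary disagreement (which an editor would nudge). This is an illustration of the idea of behavior-aware error classification, not the published MI formula.

```python
import numpy as np
from scipy import ndimage

def error_decomposition(seg: np.ndarray, gt: np.ndarray):
    """Split auto-segmentation errors into three editing-oriented
    categories (hypothetical stand-ins for MI's three error types)."""
    seg, gt = seg.astype(bool), gt.astype(bool)

    # Predicted blobs with no ground-truth overlap: delete outright.
    spurious = np.zeros_like(seg)
    labels, n = ndimage.label(seg)
    for i in range(1, n + 1):
        comp = labels == i
        if not np.logical_and(comp, gt).any():
            spurious |= comp

    # Ground-truth blobs the prediction misses entirely: draw from scratch.
    missed = np.zeros_like(gt)
    labels, n = ndimage.label(gt)
    for i in range(1, n + 1):
        comp = labels == i
        if not np.logical_and(comp, seg).any():
            missed |= comp

    # Remaining disagreement near shared borders: adjust the contour.
    boundary = np.logical_xor(seg, gt) & ~spurious & ~missed
    return int(spurious.sum()), int(missed.sum()), int(boundary.sum())

seg = np.zeros((64, 64), dtype=bool); seg[10:30, 10:30] = True; seg[50:55, 50:55] = True
gt = np.zeros_like(seg); gt[12:32, 10:30] = True; gt[40:45, 5:10] = True
print(error_decomposition(seg, gt))  # (spurious, missed, boundary) pixel counts
```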
In this paper, we propose a novel pipeline for disease quantification in positron emission tomography/computed tomography (PET/CT) images of anatomically pre-defined objects. The pipeline comprises standardized uptake value (SUV) standardization, object segmentation, and disease quantification (DQ). DQ is conducted on non-linearly standardized PET images and masks of target objects derived from CT images. Total lesion burden (TLB) is quantified by estimating the normal metabolic activity (TMAn) of the object and subtracting it from the total metabolic activity (TMA) of the object, thereby measuring the overall disease burden of the region of interest without the need to explicitly segment individual lesions. TMAn is calculated with object-specific SUV distribution models. In the modeling stage, SUV models are constructed from a set of PET/CT images obtained from normal subjects with manually delineated masks of the target objects. Two SUV modeling strategies are explored: in the hard strategy, the mean of the mean values of the modeling samples is used as a single normality value; in the fuzzy strategy, the likelihood of representing normal tissue is determined from the SUV distribution (histogram) at each SUV value. Evaluation experiments are conducted on a separate clinical dataset of normal subjects and a phantom dataset with lesions. The ratio of absolute TLB to TMA is taken as the metric, mitigating individual differences in volume size and uptake level. The results show that the ratios in normal objects are close to 0 and the ratios for lesions approach 1, demonstrating that normal tissue contributes minimally to TLB while lesion tissue accounts for most of it.
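The quantities involved are simple enough to spell out. Below is a minimal sketch of the hard strategy only, assuming the SUV image and object mask arrive as NumPy arrays: TMA is the SUV sum over the object, TMAn assumes every object voxel carries the population "mean of means" normal SUV, and TLB is their difference. The toy numbers are illustrative, not from the paper, and the fuzzy strategy is not sketched.

```python
import numpy as np

def total_metabolic_activity(suv: np.ndarray, mask: np.ndarray) -> float:
    """TMA: sum of SUV over all voxels of the object."""
    return float(suv[mask.astype(bool)].sum())

def tlb_hard(suv: np.ndarray, mask: np.ndarray, normal_mean_suv: float):
    """Hard strategy: TMAn = normal_mean_suv * object volume;
    TLB = TMA - TMAn; |TLB| / TMA is the evaluation ratio."""
    mask = mask.astype(bool)
    tma = total_metabolic_activity(suv, mask)
    tma_n = normal_mean_suv * mask.sum()
    tlb = tma - tma_n
    return tlb, abs(tlb) / tma

# Toy example: an object with a small hot lesion inside normal tissue.
suv = np.full((40, 40, 40), 1.0)        # background SUV
mask = np.zeros_like(suv, dtype=bool)
mask[10:30, 10:30, 10:30] = True        # object
suv[mask] = 2.0                         # normal object uptake
suv[18:22, 18:22, 18:22] = 8.0          # lesion voxels inside the object
tlb, ratio = tlb_hard(suv, mask, normal_mean_suv=2.0)
print(f"TLB = {tlb:.1f}, |TLB|/TMA = {ratio:.3f}")
```

Note how the lesion's excess activity drives TLB away from 0 even though no lesion is ever explicitly segmented; with no lesion present, TMA matches TMAn and the ratio stays near 0.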
Measurement of body composition, including multiple types of adipose tissue, skeletal tissue, and skeletal muscle, on computed tomography (CT) images is practical given the powerful anatomical visualization ability of CT, and is useful for clinical and research applications related to health care and underlying pathology. In recent years, deep learning-based methods have contributed significantly to the development of automatic body composition analysis (BCA). However, the unsatisfactory segmentation performance at the indistinguishable boundaries between body composition tissues and the need for large-scale datasets for training deep neural networks still need to be addressed. This paper proposes a deep learning-based approach, called the Geographic Attention Network (GA-Net), for body composition tissue segmentation on body-torso positron emission tomography/computed tomography (PET/CT) images, which leverages body area information. The representation ability of GA-Net is significantly enhanced by the body area information, as it strongly correlates with the target body composition tissues. The method achieves precise segmentation of multiple body composition tissues, especially at boundaries that are hard to distinguish, and effectively reduces the amount of data required to train the network. We evaluate the proposed model on a dataset of 50 body-torso PET/CT scans for segmenting 4 key bodily tissues: subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), skeletal muscle tissue (SMT), and skeleton (Sk). Experiments show that our proposed method increases segmentation accuracy, especially with a limited training dataset, by providing geographic information about the target body composition tissues.
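The abstract does not specify GA-Net's architecture, but the core idea of re-weighting image features with body area information can be sketched as a spatial attention gate. The PyTorch module below is a hypothetical illustration under that assumption, not the published network: a coarse body-area map is encoded into per-pixel attention weights that steer backbone features toward the region where the target tissue can actually occur.

```python
import torch
import torch.nn as nn

class GeographicAttention(nn.Module):
    """Hypothetical sketch of geographic attention: a body-area prior is
    turned into a spatial attention map that re-weights image features."""
    def __init__(self, feat_channels: int):
        super().__init__()
        self.area_encoder = nn.Sequential(
            nn.Conv2d(1, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, 1, kernel_size=1),
            nn.Sigmoid(),  # attention weights in [0, 1]
        )

    def forward(self, feats: torch.Tensor, body_area: torch.Tensor) -> torch.Tensor:
        attn = self.area_encoder(body_area)  # (N, 1, H, W)
        return feats * attn + feats          # residual gating of features

feats = torch.randn(2, 32, 64, 64)       # backbone feature maps
body_area = torch.rand(2, 1, 64, 64)     # coarse body-area prior
gated = GeographicAttention(32)(feats, body_area)
print(gated.shape)  # torch.Size([2, 32, 64, 64])
```

The residual form keeps the original features intact while letting the body-area prior amplify the anatomically plausible regions, which is one way such a prior could reduce the data needed to learn hard tissue boundaries.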