Quantifying parenchymal tissue changes in the lungs is imperative for furthering the study of radiation-induced lung damage (RILD). Registering lung images from different timepoints is a key step of this process. Traditional intensity-based registration approaches fail at this task due to the considerable anatomical changes that occur between timepoints. This work proposes a novel method to successfully register longitudinal pre- and post-radiotherapy (RT) lung computed tomography (CT) scans that exhibit large changes due to RILD, by extracting consistent anatomical features from CT (lung boundaries, main airways, vessels) and using these features to optimise the registrations. Pre-RT and 12-month post-RT CT pairs from fifteen lung cancer patients were used for this study, all with varying degrees of RILD, ranging from mild parenchymal change to extensive consolidation and collapse. For each CT, signed distance transforms from segmentations of the lungs and main airways were generated, and the Frangi vesselness map was calculated. These were concatenated into multi-channel images, and diffeomorphic multi-channel registration was performed for each image pair using NiftyReg. Traditional intensity-based registrations were also performed for comparison. For the evaluation, the pre- and post-registration landmark distance was calculated for all patients, using an average of 44 manually identified landmark pairs per patient. The mean (standard deviation) distance across all datasets decreased from 15.95 (8.09) mm before registration to 4.56 (5.70) mm after registration, compared to 7.90 (8.97) mm for the intensity-based registrations. Qualitative improvements in image alignment were observed for all patient datasets. For four representative subjects, registrations were performed for three additional follow-up timepoints up to 48 months post-RT, and similar accuracy was achieved. We have demonstrated that our novel multi-channel registration method can successfully align longitudinal scans from RILD patients in the presence of large anatomical changes such as consolidation and atelectasis, outperforming the traditional registration approach both quantitatively and on visual inspection.
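As a rough illustration of the feature-extraction step described above, the sketch below builds the channels of the multi-channel image (signed distance transforms of the lung and airway segmentations, plus a Frangi vesselness map) before they would be passed to a multi-channel registration such as NiftyReg. This is a minimal sketch, not the authors' code: the function names, the Frangi parameters, and the choice of SciPy/scikit-image implementations are assumptions.

```python
# Hedged sketch of the multi-channel feature construction described in the
# abstract. Helper names and parameter values are illustrative assumptions.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.filters import frangi


def signed_distance(mask: np.ndarray, voxel_spacing) -> np.ndarray:
    """Signed Euclidean distance to the mask boundary (negative inside the mask)."""
    outside = distance_transform_edt(~mask, sampling=voxel_spacing)
    inside = distance_transform_edt(mask, sampling=voxel_spacing)
    return outside - inside


def build_feature_channels(ct: np.ndarray, lung_mask: np.ndarray,
                           airway_mask: np.ndarray, voxel_spacing) -> np.ndarray:
    """Stack the anatomical feature maps into one multi-channel volume."""
    lung_sdt = signed_distance(lung_mask.astype(bool), voxel_spacing)
    airway_sdt = signed_distance(airway_mask.astype(bool), voxel_spacing)
    # Frangi vesselness restricted to the lungs; sigmas are example scales only.
    vesselness = frangi(ct, sigmas=range(1, 6), black_ridges=False) * lung_mask
    return np.stack([lung_sdt, airway_sdt, vesselness], axis=0)
```

The resulting channel stacks for the pre- and post-RT scans would then drive the diffeomorphic registration, with image intensities replaced by these anatomically consistent features.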
We present a novel classification system for the parenchymal features of radiation-induced lung damage (RILD). We developed a deep learning network to automate the delineation of five classes of parenchymal texture. We quantify the volumetric change in these classes after radiotherapy (RT) in order to allow detailed, quantitative descriptions of the evolution of lung parenchyma up to 24 months after RT, and correlate these with RT dose and respiratory outcomes. Diagnostic CTs were available pre-RT and at 3, 6, 12 and 24 months post-RT for 46 subjects enrolled in a clinical trial of chemoradiotherapy for non-small cell lung cancer. All 230 CT scans were segmented using our network. The five parenchymal classes showed distinct temporal patterns. Moderate correlation was seen between change in tissue class volume and clinical and dosimetric parameters; e.g., the Pearson correlation coefficient was ≤0.49 between V30 and change in Class 2, and was 0.39 between change in Class 1 and decline in forced vital capacity (FVC). Analysis of the effect of local dose on tissue class revealed a strong dose-dependent relationship. Respiratory function measured by spirometry and MRC dyspnoea scores after radiotherapy correlated with the measured radiological RILD. We demonstrate the potential of our approach to analyse and understand the morphological and functional evolution of RILD in greater detail than previously possible.
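The core of the quantitative analysis outlined above is counting voxels per tissue class at each timepoint and correlating the volume change against dosimetric or spirometry variables. A minimal sketch of that step follows; the variable names and the example values are hypothetical and the correlation is computed with SciPy's standard Pearson implementation rather than the authors' pipeline.

```python
# Hedged sketch: per-class volume from a label map, then a Pearson correlation
# between volume change and a dosimetric parameter. Example values are made up.
import numpy as np
from scipy.stats import pearsonr


def class_volumes_ml(label_map: np.ndarray, voxel_volume_ml: float,
                     n_classes: int = 5) -> np.ndarray:
    """Volume (ml) of each tissue class (labels 1..n_classes) in one scan."""
    return np.array([(label_map == c).sum() * voxel_volume_ml
                     for c in range(1, n_classes + 1)])


# Per-subject change in Class 2 volume between baseline and follow-up,
# correlated against V30 (the percentage of lung receiving >= 30 Gy).
delta_class2 = np.array([120.0, 45.0, 210.0, 80.0])   # ml, illustrative only
v30 = np.array([18.0, 9.0, 31.0, 15.0])               # %, illustrative only
r, p = pearsonr(v30, delta_class2)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```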
Radiation-induced lung damage (RILD) is a common side effect of radiotherapy (RT). The ability to automatically segment, classify, and quantify different types of lung parenchymal change is essential to uncovering the underlying patterns of RILD and their evolution over time. A RILD-dedicated tissue classification system was developed to describe lung parenchymal tissue changes at the voxel level. The classification system was automated to segment five lung tissue classes on computed tomography (CT) scans, describing incrementally increasing tissue density from normal lung (Class 1) to consolidation (Class 5). For ground-truth data generation, we employed a two-stage data annotation approach, akin to active learning. Manual segmentations were used to train a stage-one auto-segmentation method. These results were manually refined and used to train the stage-two auto-segmentation algorithm, an ensemble of six 2D U-Nets using different loss functions and numbers of input channels. The development dataset used in this study consisted of 40 cases, each with pre-radiotherapy, 3-, 6-, 12-, and 24-month follow-up CT scans (n = 200 CT scans). The method was assessed on a hold-out test dataset of 6 cases (n = 30 CT scans). The global Dice similarity coefficients (DSC) achieved for each tissue class were: Class 1, 99% and 98%; Class 2, 71% and 44%; Class 3, 56% and 26%; Class 4, 79% and 47%; and Class 5, 96% and 92%, for the development and test subsets, respectively. The lowest values for the test subset were caused by imaging artefacts or reflected subgroups that occurred infrequently and with smaller overall parenchymal volumes. We performed a qualitative evaluation on the test dataset by presenting the manual and auto-segmentations to a blinded, independent radiologist, who rated them as ‘acceptable’, ‘minor disagreement’ or ‘major disagreement’. The auto-segmentation ratings were similar to those of the manual segmentations, with approximately 90% of cases rated as acceptable for both. The proposed framework for auto-segmentation of different lung tissue classes produces acceptable results in the majority of cases and has the potential to facilitate future large studies of RILD.
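For readers unfamiliar with the reported metric, the sketch below shows a per-class Dice coefficient on label maps, together with a simple way an ensemble of segmentation networks can be combined by averaging their class-probability outputs. This is an illustrative sketch under assumptions: the ensembling shown (probability averaging followed by argmax) is one common strategy, not necessarily the one used in the study, and all function names are hypothetical.

```python
# Hedged sketch: per-class Dice evaluation and a simple probability-averaging
# ensemble over several 2D segmentation networks. Names are illustrative.
import numpy as np


def dice_per_class(pred: np.ndarray, ref: np.ndarray, n_classes: int = 5) -> dict:
    """Dice = 2|P∩R| / (|P|+|R|) for each class label 1..n_classes."""
    scores = {}
    for c in range(1, n_classes + 1):
        p, r = (pred == c), (ref == c)
        denom = p.sum() + r.sum()
        # Convention: Dice of an absent class in both maps is counted as 1.0.
        scores[c] = 2.0 * np.logical_and(p, r).sum() / denom if denom else 1.0
    return scores


def ensemble_predict(prob_maps: list) -> np.ndarray:
    """Average member-network class probabilities and take the argmax.

    Each element of prob_maps has shape (n_classes, H, W); the returned
    label map uses 1-based class labels to match the Class 1..5 convention.
    """
    return np.mean(prob_maps, axis=0).argmax(axis=0) + 1
```

A "global" DSC, as reported above, can be obtained by pooling the voxels of all scans in a subset before applying the same per-class formula.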