Recently, several efforts have been made to develop deep learning (DL) algorithms for the automatic detection and segmentation of brain metastases (BM). In this study, we developed an advanced DL model for BM detection and segmentation, with a focus on small-volume BM. From the institutional cancer registry, contrast-enhanced magnetic resonance images of 65 patients with 603 BM were collected to train and evaluate the model. Of the 65 patients, 12 patients with 58 BM were assigned to the test set for performance evaluation. Ground truth was established by one radiation oncologist, who manually delineated each BM, and cross-checked by another. Unlike previous studies, this study dealt with relatively small BM, so the area occupied by each BM in the high-resolution images was small. To help the network learn small BM, we applied an overlapping patch technique and 2.5-dimensional (2.5D) training to the well-known 2D U-Net architecture as effective preprocessing. Detection performance was measured by sensitivity and the average false-positive rate, which were 97% and 1.25 per patient, respectively. Segmentation performance was measured by the Dice coefficient with dilation and the 95% Hausdorff distance, which were 75% and 2.057 mm, respectively. Our DL model can detect and segment small-volume BM with good performance, offering considerable benefit to clinicians by automating BM detection and segmentation for stereotactic ablative radiotherapy.
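The 2.5D overlapping patch preparation described above can be sketched as follows. This is a minimal illustration only: the patch size, stride, and number of neighbouring slices are assumed values, not parameters reported in the study.

```python
import numpy as np

def extract_25d_patches(volume, patch_size=64, stride=32, context=1):
    """Slice a 3D MR volume into overlapping 2D patches, stacking each
    axial slice with its `context` neighbours as channels (2.5D input).
    Illustrative sketch; parameter values are assumptions."""
    depth, height, width = volume.shape
    patches = []
    for z in range(context, depth - context):
        # stack neighbouring slices as channels: (2*context + 1, H, W)
        stack = volume[z - context:z + context + 1]
        # overlapping patches: stride smaller than patch_size
        for y in range(0, height - patch_size + 1, stride):
            for x in range(0, width - patch_size + 1, stride):
                patches.append(stack[:, y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

vol = np.random.rand(5, 128, 128).astype(np.float32)
patches = extract_25d_patches(vol)
```

Each patch then feeds a 2D network with three input channels (slice above, centre slice, slice below), which is how 2.5D training supplies through-plane context to a 2D U-Net.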
This work constructed a new metal artifact reduction (MAR) framework for kilo-voltage (kV) computed tomography (CT) images by combining (1) deep learning and (2) multi-modal imaging, termed MARTIAN (Metal Artifact Reduction throughout Two-step sequentIAl deep convolutional neural Networks). Most CNNs under supervised learning require pairs of artifact-free and artifact-contaminated images for artifact correction. Unlike kV CT, mega-voltage (MV) CT is insensitive to metal artifacts owing to its different physical characteristics, which facilitates generating artifact-free synthetic kV CT images through the first network (Network 1). The pairs of true kV CT and post-processed artifact-free synthetic kV CT images were then used to train a subsequent network (Network 2) that performs the actual MAR. The proposed framework was implemented with generative adversarial networks (GANs) using 90 scans from head-and-neck and brain radiotherapy and validated on 10 independent cases against commercial MAR software. The artifact-free kV CT images produced by Network 1 and post-processing achieved a structural similarity (SSIM) of 0.997 and a mean absolute error (MAE) of 10.2 HU relative to true kV CT. Network 2, in charge of the actual MAR, successfully suppressed metal artifacts while retaining detailed imaging information, yielding an SSIM of 0.995 against 0.997 from the commercial MAR.
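The two-step data flow can be illustrated with a minimal sketch: Network 1 plus post-processing turns MV CT into artifact-free synthetic kV CT, and the resulting pairs become the training data for Network 2. The function and argument names below are hypothetical stand-ins, not the paper's implementation.

```python
def build_training_pairs(kv_scans, mv_scans, network1, postprocess):
    """Assemble (artifact-contaminated kV, artifact-free synthetic kV) image
    pairs for training Network 2. `network1` and `postprocess` are stand-ins
    for the MV-to-kV synthesis GAN and its post-processing step."""
    pairs = []
    for kv, mv in zip(kv_scans, mv_scans):
        # Step 1: synthesize an artifact-free kV CT from the MV CT
        clean_kv = postprocess(network1(mv))
        # Step 2 trains on (true kV, artifact-free kV) pairs
        pairs.append((kv, clean_kv))
    return pairs
```

At inference time, only Network 2 is needed: it maps an artifact-contaminated kV CT directly to a corrected image, since the MV CT was required only to construct the training targets.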
Deep convolutional neural networks (CNNs) have helped enhance the image quality of cone-beam computed tomography (CBCT) by generating synthetic CT. Most previous works, however, trained the network with intensity-based loss functions alone, which may fail to promote image-feature similarity, and their verification was insufficient to demonstrate clinical applicability. This work investigated the effect of variable loss functions combining feature- and intensity-driven losses in synthetic CT generation, and strengthened the verification of the generated images in terms of both image similarity and dosimetric accuracy. The proposed strategy highlighted feature-driven quantification by (1) training the network with a perceptual loss in addition to L1 and structural similarity (SSIM) losses for anatomical similarity, and (2) evaluating image similarity by the feature mapping ratio (FMR) in addition to conventional metrics. The synthetic CT images were also assessed for dose calculation accuracy with a commercial Monte Carlo algorithm. The network was trained with 50 paired CBCT-CT scans acquired on the same CT simulator and treatment unit to control environmental factors other than the loss functions. For 10 independent cases, incorporating the perceptual loss into the L1 and SSIM losses outperformed the other combinations, enhancing the FMR of image similarity by 10% and the dose calculation accuracy by 1–2% in gamma passing rate under the 1%/1 mm criterion.
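A minimal sketch of such a combined loss is shown below. The SSIM term here is a simplified global (unwindowed) form and the weights are illustrative, so this approximates rather than reproduces the paper's exact formulation; the `features` callable stands in for a fixed feature-extraction network (e.g. a pretrained CNN) used by the perceptual term.

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute intensity difference."""
    return np.mean(np.abs(pred - target))

def ssim_loss(pred, target, c1=0.01**2, c2=0.03**2):
    """1 - SSIM, computed globally over the whole image (simplified;
    windowed SSIM as used in practice may differ)."""
    mu_p, mu_t = pred.mean(), target.mean()
    var_p, var_t = pred.var(), target.var()
    cov = ((pred - mu_p) * (target - mu_t)).mean()
    ssim = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / \
           ((mu_p**2 + mu_t**2 + c1) * (var_p + var_t + c2))
    return 1.0 - ssim

def combined_loss(pred, target, features, w_l1=1.0, w_ssim=1.0, w_perc=0.1):
    """Weighted sum of intensity (L1), structural (SSIM) and feature
    (perceptual) terms; the weights are assumed, not from the paper."""
    perceptual = np.mean(np.abs(features(pred) - features(target)))
    return (w_l1 * l1_loss(pred, target)
            + w_ssim * ssim_loss(pred, target)
            + w_perc * perceptual)
```

The design intent is that L1 and SSIM anchor voxel-wise and structural fidelity while the perceptual term penalizes differences in learned feature maps, which intensity losses alone cannot capture.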
Purpose: Heart toxicity, such as major acute coronary events (ACE), following breast radiation therapy (RT) is of utmost concern. Thus, many studies have investigated the effect of mean heart dose (MHD) and of dose received by heart sub-structures on toxicity. Most focused on dose thresholds in the heart and its sub-structures, while few adopted computational methods such as deep neural networks (DNNs) and radiomics. This work aims to construct a feature-driven predictive model for ACE after breast RT. Methods: We adopted a recently proposed two-step predictive model that extracts features from a deep auto-segmentation network and processes the selected features for prediction, refining the auto-segmentation network and feature-processing algorithms to enhance performance in cardiac toxicity prediction. In the predictive model, a deep convolutional neural network (CNN) extracted features from 3D computed tomography (CT) images and dose distributions in three automatically segmented heart sub-structures: the left anterior descending artery (LAD), right coronary artery (RCA), and left ventricle (LV). The optimal feature-processing workflow for the extracted features was explored to enhance prediction accuracy. Regions associated with toxicity were visualized using a class activation map (CAM)-based technique. The proposed model was validated against a conventional DNN (convolutional and fully connected layers) and radiomics with a cohort of 84 cases, comprising 29 cases with and 55 without ACE. Of the 84 cases, 12 randomly chosen cases (5 toxicity and 7 non-toxicity) were set aside for an independent test, and the remaining 72 cases were used for 4-fold stratified cross-validation. Results: Our predictive model outperformed the conventional DNN by 38% and 10%, and the radiomics-based predictive models by 9% and 10%, in area under the curve (AUC) for the 4-fold cross-validation and the independent test, respectively.
The degree of improvement was greater when dose information and heart sub-structures were incorporated into feature extraction. The model whose inputs were CT, dose, and the three sub-structures (LV, LAD, and RCA) reached an average prediction accuracy of 96% and an average AUC of 0.94 in cross-validation, and achieved a prediction accuracy of 83% and an AUC of 0.83 in the independent test. For the 10 of 12 correctly predicted independent-test cases, the activation maps indicated that in cases with ACE toxicity, higher activation was more likely to be observed inside the LV. Conclusions: The proposed model, characterized by modified model inputs incorporating dose distributions and cardiac sub-structures and by serial feature-extraction and feature-selection processing, can improve predictive performance for ACE following breast RT.
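The feature-selection stage of such a two-step model can be sketched, purely for illustration, as a correlation-based ranking of extracted features against the binary toxicity label; the actual feature-processing workflow explored in the study may differ.

```python
import numpy as np

def select_top_k(features, labels, k):
    """Keep the k features most correlated (in absolute value) with the
    binary toxicity label. Illustrative stand-in for the paper's
    feature-processing step; `features` is (n_cases, n_features)."""
    labels = np.asarray(labels, dtype=float)
    centred = features - features.mean(axis=0)
    lab_c = labels - labels.mean()
    # per-feature covariance with the label, normalized to a correlation
    denom = features.std(axis=0) * labels.std() + 1e-12
    scores = np.abs((centred * lab_c[:, None]).mean(axis=0)) / denom
    idx = np.argsort(scores)[::-1][:k]
    return features[:, idx], idx
```

In the two-step scheme, features pooled from the trained auto-segmentation CNN would be filtered this way before being passed to a downstream classifier, reducing dimensionality on a small cohort.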
PURPOSE/OBJECTIVES Stereotactic ablative radiotherapy (SABR) is becoming popular in the treatment of brain metastases (BM). However, detection and manual delineation of BM are labor-intensive processes, and normal blood vessels are occasionally mistaken for metastatic lesions. In this study, we investigated the efficacy and accuracy of a deep learning (DL) model for the detection and segmentation of BM using black-blood (BB) magnetic resonance imaging (MRI). MATERIALS/METHODS The BB MRI data of 48 patients with 806 BMs were collected to train and validate the DL model. Since MRI data have an inconsistent intensity scale across patients, we applied a piecewise linear histogram-matching algorithm, also called Nyul normalization. To deal with intensity inhomogeneity within each patient, N4 bias field correction was applied. A modified U-Net was implemented to automatically detect and segment BMs. Detection performance was measured with sensitivity and average false positives, and segmentation performance with the Dice similarity coefficient (DSC). RESULTS Twelve patients with 132 BMs were randomly selected as the test set for evaluating the trained model. In the test set, 19.2% of BMs had a volume of < 0.02 cc; the maximum, minimum, median, and mean volumes were 16.642 cc, 0.009 cc, 0.071 cc, and 1.158 cc, respectively. The sensitivity was 96.87% and the average number of false positives was 0.2. Considering < 0.02 cc as small volume, the model achieved 100% sensitivity for BMs with volumes ≥ 0.02 cc and 84.6% sensitivity for BMs with volumes < 0.02 cc. The DSC was 86.27 (range, 77-91.2). CONCLUSIONS Our model can detect and segment BMs on BB MRI data with good detection and segmentation performance. A further study is under way to generate a synthetic BB image from the T1-Gd image by applying this model.
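The Dice similarity coefficient used for segmentation evaluation above can be computed, in a minimal sketch, as twice the overlap of predicted and ground-truth masks divided by their total volume:

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient between two binary masks.
    Returns 1.0 for two empty masks by convention."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * intersection / total if total else 1.0
```

For small lesions the denominator is tiny, so a few misclassified voxels move the DSC substantially, which is one reason small-volume BM segmentation is evaluated separately.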