Purpose: Needle-based procedures for diagnosing and treating prostate cancer, such as biopsy and brachytherapy, have incorporated three-dimensional (3D) transrectal ultrasound (TRUS) imaging to improve needle guidance. Using these images effectively typically requires the physician to manually segment the prostate to define the margins used for accurate registration, targeting, and other guidance techniques. However, manual prostate segmentation is a time-consuming and difficult intraoperative process, often performed while the patient is under sedation (biopsy) or anesthetic (brachytherapy). An automatic 3D TRUS prostate segmentation method could provide physicians with a quick and accurate segmentation, minimize procedure time, and support an efficient workflow with improved patient throughput, enabling faster patient access to care. The purpose of this study was to develop a supervised deep learning-based method to segment the prostate in 3D TRUS images from different facilities, generated using multiple acquisition methods and commercial ultrasound machine models, to create a generalizable algorithm for needle-based prostate cancer procedures.

Methods: Our proposed method for 3D segmentation involved prediction on two-dimensional (2D) slices sampled radially around the approximate central axis of the prostate, followed by reconstruction into a 3D surface. A 2D U-Net was modified, trained, and validated using 2D images resliced from 84 end-fire and 122 side-fire 3D TRUS images acquired during clinical biopsy and brachytherapy procedures. Modifications to the expansion section of the standard U-Net included the addition of 50% dropout to reduce overfitting and the use of transpose convolutions, instead of standard upsampling followed by convolution, to improve performance. Manual contours provided the annotations for the training, validation, and testing datasets, with the testing dataset consisting of 20 end-fire and 20 side-fire unseen 3D TRUS images. Since predicting on 2D images has the potential to lose spatial and structural information, comparisons of our 3D reconstruction approach to optimized 3D networks, including 3D V-Net, Dense V-Net, and high-resolution 3D-Net, were performed following an investigation into different loss functions. An extended selection of absolute and signed error metrics was computed, including pixel map comparisons [Dice similarity coefficient (DSC), recall, and precision], volume percent difference (VPD), mean surface distance (MSD), and Hausdorff distance (HD), to assess 3D segmentation accuracy.

Results: Overall, our proposed reconstructed modified U-Net performed with a median [first quartile, third quartile] absolute DSC, recall, precision, VPD, MSD, and HD of 94.1 [92.6, 94.9]%, 96.0 [93.1, 98.5]%, 93.2 [88.8, 95.4]%, 5.78 [2.49, 11.50]%, 0.89 [0.73, 1.09] mm, and 2.89 [2.37, 4.35] mm, respectively. When compared to the best-performing optimized 3D network (i.e., 3D V-Net with a Dice plus cross-entropy loss function), our proposed method performed with a significant ...
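As a concrete illustration of the expansion-section modifications described above, the sketch below shows one decoder block that uses a transpose convolution in place of upsampling followed by convolution, with 50% dropout to reduce overfitting. The channel counts, kernel sizes, and exact placement of the dropout layer are assumptions for illustration and are not taken from the published architecture.

```python
# Minimal sketch of one decoder ("expansion") block with the two modifications
# described above: a learned transpose convolution for upsampling and 50% dropout.
# Channel counts, kernel sizes, and dropout placement are illustrative assumptions.
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        # Transpose convolution replaces upsample-then-convolve (assumed stride 2).
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Dropout2d(p=0.5),  # 50% dropout to reduce overfitting
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        x = self.up(x)                   # upsample with learned weights
        x = torch.cat([x, skip], dim=1)  # concatenate encoder skip connection
        return self.conv(x)
```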
Three-dimensional (3D) transrectal ultrasound (TRUS) is utilized in prostate cancer diagnosis and treatment, necessitating time-consuming manual prostate segmentation. We have previously developed an automatic 3D prostate segmentation algorithm involving deep learning prediction on radially sampled 2D images followed by 3D reconstruction, trained on a large, clinically diverse dataset with variable image quality. As large clinical datasets are rare, widespread adoption of automatic segmentation could be facilitated by efficient 2D-based approaches and the development of an image quality grading method. The complete training dataset of 6761 2D images, resliced from 206 3D TRUS volumes acquired using end-fire and side-fire acquisition methods, was split to train two separate networks using either end-fire or side-fire images. The split datasets were further reduced to 1000, 500, 250, and 100 2D images. For deep learning prediction, modified U-Net and U-Net++ architectures were implemented and compared using an unseen test dataset of 40 3D TRUS volumes. A 3D TRUS image quality grading scale with three factors (acquisition quality, artifact severity, and boundary visibility) was developed to assess the impact of image quality on segmentation performance. For the complete training dataset, the U-Net and U-Net++ networks demonstrated equivalent performance, but when trained using the split end-fire/side-fire datasets, U-Net++ significantly outperformed U-Net. Compared to the complete training datasets, U-Net++ trained using reduced-size end-fire and side-fire datasets demonstrated equivalent performance down to 500 training images. For this dataset, image quality had no impact on segmentation performance for end-fire images but did have a significant effect for side-fire images, with boundary visibility having the largest impact. Our algorithm provided fast (<1.5 s) and accurate 3D segmentations across clinically diverse images, demonstrating generalizability and efficiency when trained on smaller datasets, supporting the potential for widespread use even when data are scarce. The development of an image quality grading scale provides a quantitative tool for assessing segmentation performance.
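The three-factor image quality grade (acquisition quality, artifact severity, and boundary visibility) could be recorded as a simple structure such as the sketch below. The per-factor scoring range and the unweighted sum are assumptions for illustration; the published scale may define its levels and aggregation differently.

```python
# Minimal sketch of a three-factor 3D TRUS image quality grade. The 0-2 range per
# factor and the equal weighting in total() are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class ImageQualityGrade:
    acquisition_quality: int   # e.g., 0 (poor) to 2 (good) -- assumed range
    artifact_severity: int     # e.g., 0 (severe) to 2 (none) -- assumed range
    boundary_visibility: int   # e.g., 0 (obscured) to 2 (clear) -- assumed range

    def total(self) -> int:
        """Overall grade as an unweighted sum of the three factors (assumption)."""
        return self.acquisition_quality + self.artifact_severity + self.boundary_visibility

# Example: grade one side-fire volume before grouping segmentation results by score.
grade = ImageQualityGrade(acquisition_quality=2, artifact_severity=1, boundary_visibility=1)
print(grade.total())  # 4
```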
Background: Mammographic screening has reduced mortality in women through the early detection of breast cancer. However, the sensitivity of mammography for breast cancer detection is significantly reduced in women with dense breasts, and breast density is itself an independent risk factor. Ultrasound (US) has been proven effective in detecting small, early-stage, and invasive cancers in women with dense breasts.

Purpose: To develop an alternative, versatile, and cost-effective spatially tracked three-dimensional (3D) US system for whole-breast imaging. This paper describes the design, development, and validation of the spatially tracked 3DUS system, including its components for spatial tracking, multi-image registration and fusion, feasibility for whole-breast 3DUS imaging and multi-planar visualization in tissue-mimicking phantoms, and a proof-of-concept healthy volunteer study.

Methods: The spatially tracked 3DUS system contains (a) a six-axis manipulator and counterbalanced stabilizer; (b) an in-house quick-release 3DUS scanner, adaptable to any commercially available US system and removable, allowing for handheld 3DUS acquisition and two-dimensional US imaging; and (c) custom software for 3D tracking, 3DUS reconstruction, visualization, and spatial-based multi-image registration and fusion of 3DUS images for whole-breast imaging. Spatial tracking of the 3D position and orientation of the system and its joints (J1–6) was evaluated in a clinically accessible workspace for bedside point-of-care (POC) imaging. Multi-image registration and fusion of acquired 3DUS images were assessed with a quadrants-based protocol in tissue-mimicking phantoms, and the target registration error (TRE) was quantified. Whole-breast 3DUS imaging and multi-planar visualization were evaluated with a tissue-mimicking breast phantom. Feasibility of spatially tracked whole-breast 3DUS imaging was assessed in a proof-of-concept healthy male and female volunteer study.

Results: Mean tracking errors were 0.87 ± 0.52, 0.70 ± 0.46, 0.53 ± 0.48, 0.34 ± 0.32, 0.43 ± 0.28, and 0.78 ± 0.54 mm for joints J1–6, respectively. Lookup table (LUT) corrections minimized the error in joints J1, J2, and J5. Compound motions exercising all joints simultaneously resulted in a mean tracking error of 1.08 ± 0.88 mm (N = 20) within the overall workspace for bedside 3DUS imaging. Multi-image registration and fusion of two acquired 3DUS images resulted in a mean TRE of 1.28 ± 0.10 mm. Whole-breast 3DUS imaging and multi-planar visualization in axial, sagittal, and coronal views were demonstrated with the tissue-mimicking breast phantom. The feasibility of the whole-breast 3DUS approach was demonstrated in healthy male and female volunteers. In the male volunteer, the high-resolution whole-breast 3DUS acquisition protocol was optimized without the added complexities of curvature and tissue deformations. With small post-acquisition corrections for motion, whole-breast 3DUS imaging was performed on the healthy female volunteer, showing relevant anatomical structures and details. Conc...
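For context, the target registration error reported above is typically computed as the mean Euclidean distance between corresponding target points after the registration transform has been applied. The sketch below illustrates this calculation; the point sets and the 4x4 homogeneous transform are placeholder inputs, not outputs of the described system.

```python
# Minimal sketch of a target registration error (TRE) calculation: the mean
# Euclidean distance between corresponding targets after registration.
import numpy as np

def target_registration_error(fixed_pts: np.ndarray,
                              moving_pts: np.ndarray,
                              transform: np.ndarray) -> float:
    """fixed_pts, moving_pts: (N, 3) corresponding targets; transform: 4x4 homogeneous."""
    homog = np.c_[moving_pts, np.ones(len(moving_pts))]   # (N, 4) homogeneous coords
    registered = (transform @ homog.T).T[:, :3]           # apply registration transform
    return float(np.mean(np.linalg.norm(registered - fixed_pts, axis=1)))

# Example with an identity registration and a 1 mm residual offset in x:
fixed = np.array([[10.0, 20.0, 30.0], [15.0, 25.0, 35.0]])
moving = fixed + np.array([1.0, 0.0, 0.0])
print(target_registration_error(fixed, moving, np.eye(4)))  # 1.0 (mm)
```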