Purpose: While multi-parametric magnetic resonance imaging (MRI) shows great promise in assisting with prostate cancer diagnosis and localization, subtle differences in appearance between cancer and normal tissue lead to many false positive and false negative interpretations by radiologists. We sought to automatically detect aggressive cancer (Gleason pattern ≥ 4) and indolent cancer (Gleason pattern 3) on a per-pixel basis on MRI to facilitate the targeting of aggressive cancer during biopsy.
Methods: We created the Stanford Prostate Cancer Network (SPCNet), a convolutional neural network model trained to distinguish between aggressive cancer, indolent cancer, and normal tissue on MRI. Ground truth cancer labels were obtained by registering MRI with whole-mount digital histopathology images from patients who underwent radical prostatectomy. Before registration, these histopathology images were automatically annotated to show Gleason patterns on a per-pixel basis. The model was trained on data from 78 patients who underwent radical prostatectomy and 24 patients without prostate cancer. The model was evaluated at the pixel and lesion levels in 322 patients, including six patients with normal MRI and no cancer, 23 patients who underwent radical prostatectomy, and 293 patients who underwent biopsy. Moreover, we assessed the ability of our model to detect clinically significant cancer (lesions with an aggressive component) and compared it to the performance of radiologists.
Results: Our model detected clinically significant lesions with an area under the receiver operating characteristic curve of 0.75 for radical prostatectomy patients and 0.80 for biopsy patients. Moreover, the model detected up to 18% of lesions missed by radiologists, and overall had a sensitivity and specificity approaching those of radiologists in detecting clinically significant cancer.
Conclusions: Our SPCNet model accurately detected aggressive prostate cancer. Its performance approached that of radiologists, and it helped identify lesions otherwise missed by radiologists. Our model has the potential to assist physicians in specifically targeting the aggressive component of prostate cancers during biopsy or focal treatment.
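The abstract does not specify SPCNet's architecture, so the following is only a minimal fully convolutional stand-in (class name, layer sizes, and the two-channel input are our assumptions, not the authors' model) showing what per-pixel three-class prediction on multi-parametric MRI looks like in PyTorch:

```python
# Minimal sketch, NOT the published SPCNet architecture: a fully
# convolutional network that maps multi-parametric MRI channels
# (e.g., T2-weighted + ADC) to per-pixel class logits over
# {normal, indolent (Gleason 3), aggressive (Gleason >= 4)}.
import torch
import torch.nn as nn

class PixelwiseClassifier(nn.Module):
    def __init__(self, in_channels: int = 2, n_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, n_classes, kernel_size=1),  # 1x1 conv: per-pixel logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # shape: (batch, n_classes, H, W)

# Toy usage: one 256x256 MRI slice with two channels.
model = PixelwiseClassifier()
logits = model(torch.randn(1, 2, 256, 256))
labels = logits.argmax(dim=1)  # 0 = normal, 1 = indolent, 2 = aggressive
```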
Purpose: Magnetic resonance imaging (MRI) has great potential to improve prostate cancer diagnosis; however, subtle differences between cancer and confounding conditions render prostate MRI interpretation challenging. The tissue collected from patients who undergo radical prostatectomy provides a unique opportunity to correlate histopathology images of the prostate with preoperative MRI and to accurately map the extent of cancer from histopathology images onto MRI. We seek to develop an open-source, easy-to-use platform to align presurgical MRI and histopathology images of resected prostates in patients who underwent radical prostatectomy, creating accurate cancer labels on MRI.
Methods: Here, we introduce RAdiology Pathology Spatial Open-Source multi-Dimensional Integration (RAPSODI), the first open-source framework for the registration of radiology and pathology images. RAPSODI relies on three steps. First, it creates a three-dimensional (3D) reconstruction of the histopathology specimen as a digital representation of the tissue before gross sectioning. Second, RAPSODI registers corresponding histopathology and MRI slices. Third, the optimized transforms are applied to the cancer regions outlined on the histopathology images to project those labels onto the preoperative MRI.
Results: We tested RAPSODI in a phantom study where we simulated various conditions, for example, tissue shrinkage during fixation. Our experiments showed that RAPSODI can reliably correct multiple artifacts. We also evaluated RAPSODI in 157 patients from three institutions who underwent radical prostatectomy, with very different pathology processing and scanning protocols. RAPSODI was evaluated on 907 corresponding histopathology-MRI slices and achieved a Dice coefficient of 0.97 ± 0.01 for the prostate, a Hausdorff distance of 1.99 ± 0.70 mm for the prostate boundary, a urethra deviation of 3.09 ± 1.45 mm, and a landmark deviation of 2.80 ± 0.59 mm between registered histopathology images and MRI.
Conclusion: Our robust framework successfully mapped the extent of cancer from histopathology slices onto MRI, providing labels for training machine learning methods to detect cancer on MRI.
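As a hedged illustration of the Results metrics above (this is not RAPSODI's actual API; function names and signatures are ours), a minimal sketch of the Dice-overlap and Hausdorff-distance computations between a registered histopathology prostate mask and the corresponding MRI mask:

```python
# Minimal sketch, NOT RAPSODI's API: the two headline registration
# metrics reported above (Dice overlap, Hausdorff boundary distance).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_mm(boundary_a: np.ndarray, boundary_b: np.ndarray,
                 pixel_spacing_mm: float) -> float:
    """Symmetric Hausdorff distance between two (N, 2) boundary point
    sets, converted from pixels to millimeters (assumes isotropic spacing)."""
    d = max(directed_hausdorff(boundary_a, boundary_b)[0],
            directed_hausdorff(boundary_b, boundary_a)[0])
    return d * pixel_spacing_mm

# Toy usage: two offset square masks standing in for prostate contours.
hist_mask = np.zeros((64, 64)); hist_mask[16:48, 16:48] = 1
mri_mask = np.zeros((64, 64)); mri_mask[18:50, 18:50] = 1
print(dice_coefficient(hist_mask, mri_mask))
```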
This paper proposes a simple, accurate, and robust approach to single-image blind super-resolution (SR). The task is formulated as a functional to be minimized with respect to both an intermediate super-resolved image and a nonparametric blur-kernel. The proposed method includes a convolution consistency constraint, which uses a non-blind learning-based SR result to better guide the estimation process. Another key component is the bi-ℓ0-ℓ2-norm regularization placed on the super-resolved, sharp image and the blur-kernel, which is shown to be quite beneficial for accurate blur-kernel estimation. The numerical optimization is implemented by coupling the splitting augmented Lagrangian method and the conjugate gradient method. With the pre-estimated blur-kernel, the final SR image is reconstructed using a simple TV-based non-blind SR method. The new method is demonstrated to achieve better performance than Michaeli and Irani [2] in terms of both kernel estimation accuracy and image SR quality.
The conclusion, based on both empirical and theoretical analysis, is that the influence of an accurate blur-kernel is significantly larger than that of an advanced image prior. Furthermore, [1] shows that "an accurate reconstruction constraint combined with a simple gradient regularization achieves SR results almost as good as those of state-of-the-art algorithms with sophisticated image priors." Only a few works have addressed the estimation of an accurate blur model within the single-image SR reconstruction process. Among the few contributions that attempt to estimate the kernel, a parametric model is usually assumed, and a Gaussian is a common choice, e.g., [16, 17, 36]. However, when the assumption does not coincide with the actual blur model, e.g., a combination of out-of-focus blur and camera shake, low-quality SR results naturally follow.
This paper focuses on the general single-image nonparametric blind SR problem. The work reported in [18] is such an example; it presents a nonparametric kernel estimation method for blind SR and blind deblurring in a unified framework. However, it restricts its treatment to single-mode blur-kernels. In addition, [18] does not originate from a rigorous optimization principle, but rather builds on the detection and prediction of step edges as an important clue for blur-kernel estimation. Another noteworthy and very relevant work is the one by Michaeli and Irani [2]. They exploit an inherent recurrence property of small natural image patches across different scales, and make use of the MAP_k-based estimation procedure [19] for recovering the kernel. Note that the effectiveness of [2] relies largely on the nearest neighbors found for the query low-res patches in the input blurred, low-res image. We should also note that in both [18] and [2] an ℓ2-norm-based kernel gradient regularization is imposed to promote kernel smoothness. Surprisingly, in spite of the similarity, there seems to be a big gap between blind SR and blind image deblurring. The attention given to non...
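For readers who want the shape of this formulation, one schematic way to write such a functional is sketched below in LaTeX. This is our notation and our guess at the term structure, not the paper's exact objective: a reconstruction term, a convolution consistency term built from the non-blind learning-based SR result ũ, and composite ℓ0-ℓ2 regularizers on the image gradients and the kernel.

```latex
% Schematic sketch only; symbols and weights (\eta, \alpha, \beta, \mu, \nu)
% are ours, not the paper's. u: latent sharp SR image, k: blur-kernel,
% y: observed low-res image, \tilde{u}: non-blind learning-based SR result,
% y^{\uparrow}: interpolated upsampling of y (our assumption),
% \mathbf{D}: downsampling operator, \nabla: image gradient.
\min_{u,\,k}\;
    \bigl\|\mathbf{D}(k \ast u) - y\bigr\|_2^2                 % reconstruction
  + \eta\,\bigl\|k \ast \tilde{u} - y^{\uparrow}\bigr\|_2^2    % convolution consistency
  + \alpha\,\|\nabla u\|_0 + \beta\,\|\nabla u\|_2^2           % bi-\ell_0-\ell_2 on image
  + \mu\,\|k\|_0 + \nu\,\|k\|_2^2                              % bi-\ell_0-\ell_2 on kernel
```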