Optical coherence tomography (OCT) images of the retina are a powerful tool for diagnosing and monitoring eye disease. However, they are plagued by speckle noise, which reduces image quality and the reliability of assessment. This paper introduces a novel speckle-reduction method inspired by the recent successes of deep learning in medical imaging. We present two versions of the network to reflect the needs and preferences of different end users. Specifically, we train a convolutional neural network to denoise cross-sections from OCT volumes of healthy eyes using either (1) a mean-squared-error loss or (2) a generative adversarial network (GAN) with Wasserstein distance and perceptual similarity. We then interrogate both methods with extensive quantitative and qualitative metrics on cross-sections from both healthy and glaucomatous eyes. The results show that the former approach provides state-of-the-art improvement in quantitative metrics such as PSNR and SSIM, and aids layer segmentation. The latter approach, which puts more weight on visual perception, performed better in qualitative comparisons based on accuracy, clarity, and personal preference. Overall, our results demonstrate the effectiveness and efficiency of a deep learning approach to denoising OCT images while preserving subtle image details.
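The MSE-trained branch of this pipeline can be illustrated at toy scale. The sketch below is plain NumPy, not the paper's CNN: it applies the standard multiplicative speckle model, uses a naive box filter as a stand-in denoiser, and scores both images with PSNR. All parameter values (noise level, image size, kernel width) are illustrative assumptions.

```python
import numpy as np

def add_speckle(img, sigma=0.3, rng=None):
    """Multiplicative speckle model: noisy = img * (1 + sigma * n)."""
    if rng is None:
        rng = np.random.default_rng(0)
    return img * (1.0 + sigma * rng.standard_normal(img.shape))

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB, one of the paper's metrics."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def mean_filter(img, k=3):
    """Naive k x k box filter as a stand-in denoiser (the paper uses a CNN)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

rng = np.random.default_rng(42)
clean = np.tile(np.linspace(0.2, 0.8, 32), (32, 1))  # smooth toy "B-scan"
noisy = add_speckle(clean, sigma=0.3, rng=rng)
den = mean_filter(noisy)
print(psnr(clean, noisy), psnr(clean, den))  # denoising should raise PSNR
```

Even this trivial denoiser improves PSNR on a smooth target; the paper's contribution is that a learned denoiser achieves this while also preserving fine retinal detail, which a box filter destroys.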
Tree-like structures, such as blood vessels, often express complexity at very fine scales, requiring high-resolution grids to adequately describe their shape. Such sparse morphology can alternatively be represented by the locations of centreline points, but learning from this type of data with deep learning is challenging because the point set is unordered and the representation must be permutation-invariant. In this work, we propose a deep neural network that directly consumes unordered points along the centreline of a branching structure and identifies the topology of the represented structure in a single shot. Key to our approach is a novel multi-task loss function, enabling instance segmentation of arbitrarily complex branching structures. We train the network solely on synthetically generated data, using domain randomization to facilitate transfer to real 2D and 3D data. Results show that our network reliably extracts meaningful information about branch locations, bifurcations, and endpoints, and sets a new benchmark for semantic instance segmentation of branching structures.
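Networks that consume unordered point sets typically achieve permutation invariance with a symmetric pooling operation over per-point features, as popularized by PointNet-style architectures. The NumPy sketch below illustrates that general mechanism (it is an assumption about the family of techniques, not the authors' actual network): a shared layer is applied to every point, and a max-pool over the point axis yields a descriptor that does not change when the points are shuffled.

```python
import numpy as np

def point_features(points, W, b):
    """Shared per-point layer: the same weights act on every point."""
    return np.maximum(points @ W + b, 0.0)  # ReLU

def global_descriptor(points, W, b):
    """Symmetric max-pool over the point axis -> order-independent vector."""
    return point_features(points, W, b).max(axis=0)

rng = np.random.default_rng(0)
pts = rng.standard_normal((100, 3))  # unordered 3D centreline points
W = rng.standard_normal((3, 16))     # hypothetical layer weights
b = rng.standard_normal(16)

d1 = global_descriptor(pts, W, b)
d2 = global_descriptor(pts[rng.permutation(100)], W, b)  # shuffled input
print(np.allclose(d1, d2))  # True: descriptor ignores point ordering
```

Because max is symmetric in its arguments, any permutation of the input rows produces the identical descriptor, which is exactly the property unordered centreline data requires.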
Virtual Fractional Flow Reserve (vFFR) is an emerging technology that assesses the severity of coronary stenosis by means of patient-specific Computational Fluid Dynamics simulations. To be of practical clinical utility within a catheter laboratory, vFFR results must be obtainable within minutes to guide intervention. We present the design of a novel Lattice-Boltzmann-method code specifically tailored for fully automatic, near real-time 3D coronary blood-flow simulations. The key contributions of this work are a hybrid multicore-GPU-accelerated sparse-lattice generation algorithm and a specialized 3D-0D coupled hemodynamics solver. We present results on state-of-the-art GPU hardware, simulating hemodynamics within a multi-segment coronary tree. The results demonstrate that vFFR simulations can be performed in the order of minutes, making it feasible to replace pressure-wire-based FFR in a catheter laboratory setting with vFFR simulations, without reducing the fidelity of the hemodynamic modelling.
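The 0D side of a 3D-0D coupling can be illustrated with the simplest possible outlet model. The sketch below integrates a hypothetical two-element Windkessel (RC) boundary condition with explicit Euler, driven by a constant inflow standing in for the flux the 3D solver would supply each time step; every parameter value here is an assumption for illustration, not a value from the paper.

```python
# Two-element Windkessel outlet: C * dP/dt = Q_in - P / R.
# In a coupled scheme, the 3D (lattice-Boltzmann) domain supplies Q_in at the
# outlet each step, and the 0D model returns the outlet pressure P.
R = 1.0e4        # distal resistance (arbitrary units, assumed)
C = 1.0e-5       # vascular compliance (assumed)
Q_in = 5.0e-4    # inflow from the 3D domain, held constant here (assumed)
dt = 1.0e-4      # time step
P = 0.0          # outlet pressure state

for _ in range(20000):  # integrate well past the RC time constant
    P += dt * (Q_in - P / R) / C  # explicit Euler update

print(P)  # converges toward the steady state Q_in * R = 5.0
```

With constant inflow, the pressure relaxes exponentially (time constant R*C) to Q_in * R, which is a useful sanity check when validating a coupled solver's boundary treatment.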
Virtual fractional flow reserve (vFFR) is an emerging technology employing patient-specific computational fluid dynamics (CFD) simulations to infer the hemodynamic significance of a coronary stenosis. Patient-specific boundary conditions are an important aspect of this approach; while most efforts use lumped-parameter models to capture the key phenomena, they lack the ability to specify the associated parameters on a patient-specific basis. When applying vFFR in a catheter laboratory setting, with X-ray angiograms as the basis for creating the simulations, some indirect functional information is available through observation of the radio-opaque contrast agent's motion. In this work, we present a novel method for tuning the lumped-parameter arterial resistances commonly incorporated in such simulations, based on simulating the physics of the contrast motion and comparing the observed and simulated arrival times of the contrast front at key points within a coronary tree. We present proof-of-principle results on a synthetically generated coronary tree comprising multiple segments, demonstrating that the method can successfully optimize the arterial resistances to reconstruct the underlying velocity and pressure fields, providing a potential new means to improve the patient specificity of simulation-based technologies in this area.
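The tuning idea can be sketched in one dimension. Assume a toy forward model in which flow through a single segment is set by a pressure drop across a resistance R, so the contrast front's arrival time grows monotonically with R; a simple bisection then recovers the resistance that reproduces an "observed" arrival time. Every number below is hypothetical, and the paper's actual problem involves many coupled resistances on a full tree rather than one scalar.

```python
# Toy forward model: a vessel segment of volume V fed by pressure drop dP
# through resistance R gives flow Q = dP / R, so the contrast arrival time
# is t = V / Q = V * R / dP (monotone increasing in R).
V = 2.0      # segment volume (assumed units)
dP = 80.0    # driving pressure drop (assumed)

def arrival_time(R):
    return V * R / dP

R_true = 1500.0                 # hidden ground-truth resistance
t_obs = arrival_time(R_true)    # "observed" arrival time

# Bisection on the monotone residual arrival_time(R) - t_obs.
lo, hi = 100.0, 10000.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if arrival_time(mid) < t_obs:
        lo = mid
    else:
        hi = mid
R_hat = 0.5 * (lo + hi)
print(R_hat)  # recovers approximately 1500.0
```

Monotonicity is what makes the scalar case trivial; on a real tree the residual is vector-valued and the paper instead optimizes all resistances jointly against arrival times at multiple key points.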