Interventional applications of photoacoustic imaging typically require visualization of point-like targets, such as the small, circular, cross-sectional tips of needles, catheters, or brachytherapy seeds. When these point-like targets are imaged in the presence of highly echogenic structures, the resulting photoacoustic wave creates a reflection artifact that may appear as a true signal. We propose to use deep learning techniques to identify these types of noise artifacts for removal in experimental photoacoustic data. To achieve this goal, a convolutional neural network (CNN) was first trained to locate and classify sources and artifacts in pre-beamformed data simulated with k-Wave. Simulations initially contained one source and one artifact with various medium sound speeds and 2-D target locations. Based on 3,468 test images, we achieved a 100% success rate in classifying both sources and artifacts. After adding noise to assess potential performance in more realistic imaging environments, we achieved at least 98% success rates for channel signal-to-noise ratios (SNRs) of -9 dB or greater, with a severe decrease in performance below -21 dB channel SNR. We then explored training with multiple sources and two types of acoustic receivers and achieved similar success with detecting point sources. Networks trained with simulated data were then transferred to experimental waterbath and phantom data with 100% and 96.67% source classification accuracy, respectively (particularly when networks were tested at depths that were included during training). The corresponding mean ± one standard deviation of the point source location error was 0.40 ± 0.22 mm for waterbath data and 0.38 ± 0.25 mm for phantom data, which provides some indication of the resolution limits of our new CNN-based imaging system. We finally show that the CNN-based information can be displayed in a novel artifact-free image format, enabling us to effectively remove reflection artifacts from photoacoustic images, which is not possible with traditional geometry-based beamforming.
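The abstract does not describe the network architecture, so the following is only a minimal, hypothetical sketch (in PyTorch) of a CNN of the general kind described above: it takes a patch of pre-beamformed channel data and jointly outputs a source-versus-artifact classification and a 2-D location estimate. The layer sizes and the two-head design are assumptions for illustration, not the authors' network.

```python
import torch
import torch.nn as nn

class SourceArtifactNet(nn.Module):
    """Toy CNN: classify a channel-data patch as source vs. artifact
    and regress its (lateral, axial) position."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),           # global pooling over the patch
        )
        self.classifier = nn.Linear(32, 2)     # logits: source vs. artifact
        self.locator = nn.Linear(32, 2)        # (lateral, axial) location in mm

    def forward(self, x):                      # x: (batch, 1, samples, channels)
        h = self.features(x).flatten(1)
        return self.classifier(h), self.locator(h)

# Example: one 256-sample x 128-channel patch of pre-beamformed data
logits, location = SourceArtifactNet()(torch.randn(1, 1, 256, 128))
```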
Medical ultrasound (US) imaging is a non-invasive imaging modality. Smaller and cheaper US systems make US imaging available to more people, leading to a democratization of medical US imaging. Improvements in general-purpose processing hardware allow the reconstruction of US images to be done in software. These implementations are known as software beamforming and provide access to the US data earlier in the processing chain. Adaptive beamforming exploits this early access to the full US data with algorithms that adapt the processing to the data, and it is claimed to improve image quality; an improved image can, in turn, lead to an improved diagnosis. Adaptive beamformers have seen enormous popularity in the research community, with exponential growth in the number of papers published. However, the complexity of the algorithms makes them hard to re-implement, which makes a thorough comparison of the algorithms difficult. The UltraSound ToolBox (USTB, https://www.USTB.no) is an open-source processing framework facilitating the comparison of imaging techniques and the dissemination of research results. The USTB, including implementations of several state-of-the-art adaptive beamformers, has partly been developed in this thesis and was used to produce most of the results presented. The results show that some of the contrast improvements reported in the literature turn out to be secondary effects of adaptive processing. More specifically, we show that many state-of-the-art algorithms alter the dynamic range, and these dynamic range alterations invalidate the conventional contrast metrics. Said differently, many adaptive algorithms are so flexible that, instead of improving image quality, they merely optimize the metrics used to evaluate image quality. We suggest a dynamic range test, comprising data and code, to assess whether an algorithm alters the dynamic range. A thorough review of the contrast metrics used in US imaging shows that there is no consensus on the metrics used in the research literature. Therefore, our introduction of the generalized contrast-to-noise ratio (GCNR) is essential, since this is a contrast metric immune to dynamic range alterations. The GCNR is a remedy for the curse of the metric-breaking abilities of software beamforming. Software beamforming also has its blessings. The flexible implementations made possible by software beamforming do lead to improved image quality: the improved resolution of the minimum variance adaptive beamformer enables enhanced visualization of the interventricular septum in the human heart, and the ability to do beamforming in software allows the implementation of the full reconstruction chain, from raw data to the final rendered images, on an iPhone. In addition to the results presented in the published papers, this thesis provides a thorough review of the software beamforming processing chain as implemented in the USTB.
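The GCNR mentioned above is defined in the literature as one minus the overlap between the probability density functions of the pixel values inside and outside a target region, which is why it cannot be inflated by stretching the dynamic range. The sketch below estimates it from two sets of image samples using histograms; it is a minimal illustration of that definition, not the USTB implementation.

```python
import numpy as np

def gcnr(samples_in, samples_out, bins=256):
    """Generalized contrast-to-noise ratio: 1 minus the overlap of the two
    pixel-value distributions (estimated with shared histogram bins)."""
    lo = min(samples_in.min(), samples_out.min())
    hi = max(samples_in.max(), samples_out.max())
    edges = np.linspace(lo, hi, bins + 1)
    p_in, _ = np.histogram(samples_in, bins=edges, density=True)
    p_out, _ = np.histogram(samples_out, bins=edges, density=True)
    dx = edges[1] - edges[0]
    overlap = np.sum(np.minimum(p_in, p_out)) * dx   # area shared by the two PDFs
    return 1.0 - overlap                             # 0 = indistinguishable, 1 = fully separable

# Example with synthetic data: a dark cyst region vs. a brighter background
rng = np.random.default_rng(0)
print(gcnr(rng.normal(0.0, 1.0, 10_000), rng.normal(2.0, 1.0, 10_000)))
```

Because the metric depends only on how much the two distributions overlap, any strictly monotonic remapping of pixel values, such as a dynamic range alteration, leaves it unchanged up to histogram binning effects.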
Our nationwide network of BME women faculty collectively argues that the racial funding disparity at the National Institutes of Health (NIH) remains the most insidious barrier to the success of Black faculty in our profession. We thus refocus attention on this critical barrier and suggest how it can be dismantled.
Cardiac interventional procedures are often performed under fluoroscopic guidance, exposing both the patient and operators to ionizing radiation. To reduce this risk of radiation exposure, we are exploring the use of photoacoustic imaging paired with robotic visual servoing for cardiac catheter visualization and surgical guidance. A cardiac catheterization procedure was performed on two in vivo swine after inserting an optical fiber into the cardiac catheter to produce photoacoustic signals from the tip of the fiber-catheter pair. A combination of photoacoustic imaging and robotic visual servoing was employed to visualize and maintain constant sight of the catheter tip in order to guide the catheter through the femoral or jugular vein, toward the heart. Fluoroscopy provided initial ground truth estimates for 1D validation of the catheter tip positions, and these estimates were refined using a 3D electromagnetic-based cardiac mapping system as the ground truth. The 1D and 3D root mean square errors ranged from 0.25 mm to 2.28 mm and from 1.24 mm to 1.54 mm, respectively. The catheter tip was additionally visualized at three locations within the heart: (1) inside the right atrium, (2) in contact with the right ventricular outflow tract, and (3) inside the right ventricle. Lasered regions of cardiac tissue were resected for histopathological analysis, which revealed no laser-related tissue damage, despite the use of 2.98 mJ per pulse at the fiber tip (379.2 mJ/cm² fluence). In addition, there was a 19 dB difference in photoacoustic signal contrast when visualizing the catheter tip
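As a consistency check on the reported exposure, and assuming the fluence was obtained by dividing the pulse energy by the emitting area at the fiber tip (the fiber diameter is not stated in the abstract), the quoted numbers imply an emitting area of about 7.9 × 10⁻³ cm², i.e., a tip diameter of roughly 1 mm:

```latex
\Phi = \frac{E}{A}
\;\Rightarrow\;
A = \frac{E}{\Phi} = \frac{2.98\ \text{mJ}}{379.2\ \text{mJ/cm}^2}
  \approx 7.9\times10^{-3}\ \text{cm}^2,
\qquad
d = 2\sqrt{A/\pi} \approx 1.0\ \text{mm}.
```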
A fundamental, unsolved vision problem is to distinguish image intensity variations caused by surface normal variations from those caused by reflectance changes, i.e., to tell shading from paint. A solution to this problem is necessary for machines to interpret images as people do and could have many applications. We take a learning-based approach. We generate a training set of synthetic images containing both shading and reflectance variations, and label the interpretations by indicating which coefficients in a steerable pyramid representation of the image were caused by shading and which by paint. To analyze local image evidence for shading or reflectance, we study the outputs of two layers of filters, each followed by rectification. We fit a probability density model to the filter outputs using a mixture of factor analyzers. The resulting model indicates the probability, based on local image evidence, that a pyramid coefficient at any orientation and scale was caused by shading or by reflectance variations. We take the lighting direction to be that which generates the most shape-like labelling. The labelling allows us to reconstruct bandpassed images containing only those parts of the input image caused by shading effects, and a separate image containing only those parts caused by reflectance changes. The resulting classifications compare well with human psychophysical performance on a test set of images, and show good results for test photographs.
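The abstract describes the final labelling step only in words; the following is a schematic sketch of that step, assuming the two density models over local filter responses (for example, the mixtures of factor analyzers described above) have already been fitted and are exposed as callables p_shading and p_paint. The names and array shapes are hypothetical.

```python
import numpy as np

def split_band(band, features, p_shading, p_paint):
    """Label each bandpass (pyramid) coefficient as shading or paint and
    split the band into a shading-only part and a reflectance-only part.

    band      : (H, W) coefficients at one scale and orientation
    features  : (H, W, D) local filter responses used as evidence
    p_shading, p_paint : fitted density models returning one likelihood
                         per D-dimensional feature vector
    """
    flat = features.reshape(-1, features.shape[-1])
    is_shading = (p_shading(flat) > p_paint(flat)).reshape(band.shape)
    shading_band = np.where(is_shading, band, 0.0)   # keeps only shading coefficients
    paint_band = np.where(is_shading, 0.0, band)     # keeps only reflectance coefficients
    return shading_band, paint_band
```

Summing the shading-only (or reflectance-only) bands across scales and orientations then yields the reconstructed bandpassed images described in the abstract.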