With traditional beamforming methods, ultrasound B-mode images contain speckle noise caused by the random interference of subresolution scatterers. In this paper, we present a framework for using neural networks to beamform ultrasound channel signals into speckle-reduced B-mode images. We introduce log-domain normalization-independent loss functions that are appropriate for ultrasound imaging. A fully convolutional neural network was trained with simulated channel signals that were spatially co-registered to ground-truth maps of echogenicity. Networks were designed to accept 16 beamformed subaperture radiofrequency signals. Training performance was compared as a function of training objective, network depth, and network width. The networks were then evaluated on simulation, phantom, and in vivo data and compared against existing speckle reduction techniques. The most effective configuration was found to be the deepest (16-layer) and widest (32-filter) network, trained to minimize a normalization-independent mixture of the ℓ1 and multi-scale structural similarity (MS-SSIM) losses. The neural network significantly outperformed delay-and-sum and receive-only spatial compounding in speckle reduction while preserving resolution, and exhibited improved detail preservation over a non-local means method. This work demonstrates that ultrasound B-mode image reconstruction using machine-learned neural networks is feasible and establishes that networks trained solely in silico can generalize to real-world imaging in vivo, producing images with significantly reduced speckle.
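To see why a normalization-independent loss is natural here: B-mode images are log-compressed, so an arbitrary global gain on the image becomes an additive offset in the log domain, and the loss can be minimized over that offset in closed form (for ℓ1, the optimal offset is the median of the residual). The sketch below is a minimal NumPy illustration of the ℓ1 term only; the function name and array shapes are illustrative assumptions, not the paper's implementation, and the MS-SSIM term of the mixed objective is omitted.

```python
import numpy as np

def ni_l1_loss(pred_log, target_log):
    """Normalization-independent L1 loss in the log domain (illustrative).

    A global gain g on the linear-scale image becomes an additive offset
    c = log(g) after log compression, so we minimize
    mean(|pred + c - target|) over c. For the L1 norm, the optimal offset
    is the median of the residual (target - pred).
    """
    c = np.median(target_log - pred_log)   # closed-form optimal offset
    return np.mean(np.abs(pred_log + c - target_log))

# Usage: a prediction differing from the target only by a constant
# global gain incurs zero loss, as desired.
rng = np.random.default_rng(0)
target = rng.standard_normal((64, 64))    # ground-truth log-echogenicity map
pred = target + 7.5                       # same image, different global gain
print(ni_l1_loss(pred, target))           # 0.0
```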
Simulations of acoustic wave propagation that model both forward and backward propagation of the wave (also known as full-wave simulations) are increasingly utilized in ultrasound imaging due to their ability to more accurately model important acoustic phenomena. Realistic anatomic models, particularly of the abdominal wall, are needed to take full advantage of these simulation tools. We describe a method for converting fat-water-separated magnetic resonance imaging (MRI) volumes into anatomical models for ultrasound simulations. These acoustic models are used to map acoustic properties, such as speed of sound and density, to grid points in an ultrasound simulation. The models are segmented from the MRI volumes into five primary tissue classes of the human abdominal wall (skin, fat, muscle, connective tissue, and nontissue). This segmentation is achieved using an unsupervised machine learning algorithm, fuzzy c-means clustering (FCM), applied to a multiscale feature representation of the MRI volumes. We describe an automated method that uses the FCM membership weights to produce a model achieving ∼90% agreement with manual segmentation. Two-dimensional (2-D) and three-dimensional (3-D) full-wave nonlinear ultrasound simulations are conducted, demonstrating the utility of realistic 3-D abdominal wall models over previously available 2-D abdominal wall models.
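Fuzzy c-means assigns each voxel a soft membership in every tissue class rather than a hard label, alternating between membership and centroid updates until convergence. Below is a minimal NumPy sketch of the standard FCM iteration on flattened per-voxel feature vectors; the feature construction and five-class setup are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np

def fuzzy_c_means(X, c=5, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means on feature vectors X, shape (n_samples, n_features).

    Returns (centers, u), where u[k, i] is the soft membership of sample k
    in class i and each row of u sums to 1.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)                    # random soft init
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]   # fuzzy centroids
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2)
        d = np.maximum(d, 1e-12)                         # guard zero distances
        # Standard membership update: u_ki = 1 / sum_j (d_ki / d_kj)^(2/(m-1))
        u_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return centers, u

# Usage: cluster per-voxel MRI features (e.g., fat/water intensities at
# several scales) into c=5 soft classes; argmax gives a hard segmentation.
X = np.random.default_rng(1).random((1000, 4))
centers, u = fuzzy_c_means(X, c=5)
labels = u.argmax(axis=1)
```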
Deep neural networks (DNNs) have recently emerged as powerful function approximators that can learn to transform given inputs into desired outputs when provided with enough training samples. Ultrasound B-mode images are traditionally formed using the delay-and-sum beamformer to display the echogenicity (i.e., backscattering strength) of the medium. Advanced beamformers are often designed to improve indirect metrics of image quality, such as lesion detectability and point target resolution. Here, we describe how DNNs can instead be trained directly to minimize errors in the output images relative to ground-truth targets, and we present recent work applying DNNs to two specific applications. First, a simple convolutional DNN was trained to accurately estimate echogenicity for B-mode imaging using Field II simulations; the DNN produced speckle-reduced B-mode images of data acquired in simulations, phantoms, and in vivo. Second, a similar DNN was trained to detect targeted microbubbles for ultrasound molecular imaging; it nondestructively detected microbubbles with performance comparable to a state-of-the-art destructive imaging technique while enabling real-time molecular imaging.
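As a concrete illustration of such a "simple convolutional DNN," the sketch below defines a small fully convolutional network in PyTorch that maps multi-channel beamformed input to a single-channel echogenicity estimate. The 16-channel input and the depth/width values echo the configuration described above, but the specific layer choices are assumptions for illustration, not the papers' exact architecture.

```python
import torch
import torch.nn as nn

class BeamformerCNN(nn.Module):
    """Minimal fully convolutional network: 16 beamformed subaperture
    channels in, one log-echogenicity image out. Depth and width here
    are illustrative defaults, not the published configuration."""

    def __init__(self, in_channels=16, width=32, depth=16):
        super().__init__()
        layers = [nn.Conv2d(in_channels, width, kernel_size=3, padding=1),
                  nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, 1, kernel_size=3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # x: (batch, 16, axial, lateral) beamformed subaperture signals
        return self.net(x)

# Usage: being fully convolutional, the network accepts arbitrary
# image sizes and preserves the spatial dimensions of its input.
model = BeamformerCNN()
x = torch.randn(1, 16, 128, 128)
print(model(x).shape)  # torch.Size([1, 1, 128, 128])
```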