Ultrafast imaging based on coherent plane-wave compounding is one of the most important recent developments in medical ultrasound. It significantly improves image quality and allows for much faster image acquisition. This technique, however, incurs a large computational load, motivating methods for reducing sampling and processing rates. In this work, we extend the recently proposed frequency-domain beamforming (FDBF) framework to plane-wave imaging. Beamforming in frequency yields the same image quality while using fewer samples. It achieves at least a fourfold reduction in sampling and processing rates by avoiding the oversampling required by standard processing. To further reduce the rate, we exploit the structure of the beamformed signal and use compressed sensing methods to recover the beamformed signal from its partial frequency data obtained at a sub-Nyquist rate. Our approach obtains a tenfold rate reduction compared with standard time-domain processing. We verify performance in terms of spatial resolution and contrast based on scans of a tissue-mimicking phantom obtained with a commercial Aixplorer system. In addition, in vivo carotid and thyroid scans processed using standard beamforming and FDBF are presented for qualitative evaluation and visual comparison. Finally, we demonstrate the use of FDBF for shear-wave elastography by generating velocity maps from the beamformed data processed at sub-Nyquist rates.
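The compressed-sensing step described above, recovering the beamformed signal from a subset of its frequency-domain samples, can be illustrated with a small sketch. The code below is only a toy illustration under simplifying assumptions (the beamformed line is modeled as a few strong reflectors and measured through a random partial DFT; the paper's actual recovery scheme and parameters differ): it solves an l1-regularized least-squares problem with iterative soft thresholding (ISTA).

```python
# Toy sketch: recover a sparse "beamformed line" from a random subset of its
# DFT coefficients via ISTA. Not the authors' implementation; parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 512, 8, 128        # signal length, number of reflectors, number of frequency samples

# Ground truth: a beamformed line modeled as a few strong reflectors
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.uniform(1.0, 2.0, size=k)

# Sub-Nyquist measurements: a random subset of the signal's DFT coefficients
F = np.fft.fft(np.eye(n)) / np.sqrt(n)       # unitary DFT matrix
A = F[rng.choice(n, m, replace=False), :]    # partial Fourier measurement operator
y = A @ x_true

# ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1  (||A|| <= 1 since F is unitary)
lam, x = 0.02, np.zeros(n, dtype=complex)
for _ in range(500):
    z = x - A.conj().T @ (A @ x - y)
    x = z * np.maximum(1.0 - lam / np.maximum(np.abs(z), 1e-12), 0.0)   # complex soft threshold

print("true reflector positions:      ", sorted(support.tolist()))
print("largest recovered coefficients:", sorted(np.argsort(np.abs(x))[-k:].tolist()))
```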
Ultrafast imaging based on coherent plane-wave compounding is one of the most important recent developments in medical ultrasound. It significantly improves image quality and allows for much faster image acquisition. This method, however, incurs a severe computational load that creates a major bottleneck to its implementation on existing commercial systems. To overcome this limitation, we translate beamforming, the basic processing step, to the frequency domain. As a result, the computations can be carried out much more efficiently and with fewer data samples. To this end, we extend the frequency-domain beamforming (FDBF) framework, developed recently for the focused imaging mode, to plane-wave imaging. We show that the core of FDBF, the relationship between the beam and the detected signals in the frequency domain, holds and can be implemented efficiently by introducing an appropriate approximation. We also show that dynamic aperture and apodization, crucial for image quality improvement, can be applied directly in frequency as part of FDBF. The translation of beamforming into the frequency domain allows for data rate reduction by eliminating the oversampling required by the digital implementation of beamforming in time. As a result, the signals are sampled and processed at their effective Nyquist rate, leading to a fourfold reduction in the number of samples.
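The following toy sketch illustrates only the basic identity underlying FDBF: a time delay corresponds to a linear phase in the frequency domain, so delay-and-sum beamforming with per-channel apodization can be carried out on the channels' DFT coefficients. It assumes constant integer per-channel delays and circular shifts for simplicity; the paper's FDBF framework handles the depth-dependent plane-wave delays through an approximated frequency-domain kernel, which is not reproduced here.

```python
# Toy sketch: time-domain delay-and-sum equals a per-channel phase ramp in frequency.
# Assumes constant integer delays and circular shifts; not the paper's FDBF kernel.
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_samples = 16, 256
signals = rng.normal(size=(n_channels, n_samples))          # detected channel signals
delays = rng.integers(0, 32, size=n_channels)                # per-channel delays (samples)
apod = np.hanning(n_channels)                                # apodization weights

# Time-domain delay-and-sum (circular shifts for simplicity)
beam_time = sum(w * np.roll(s, -d) for w, s, d in zip(apod, signals, delays))

# Frequency-domain equivalent: phase ramp per channel, weighted sum, inverse DFT
k = np.arange(n_samples)
spectra = np.fft.fft(signals, axis=1)
phase = np.exp(2j * np.pi * np.outer(delays, k) / n_samples)  # advance by d samples
beam_freq = np.fft.ifft((apod[:, None] * phase * spectra).sum(axis=0)).real

print(np.allclose(beam_time, beam_freq))                      # True
```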
Clouds are a key factor in Earth's energy budget and thus significantly affect climate and weather predictions. These effects are dominated by shallow warm clouds (Sherwood et al., 2014; Zelinka et al., 2020), which tend to be small and heterogeneous. Therefore, remote sensing of clouds and three-dimensional (3D) volumetric reconstruction of their internal properties are of significant importance.

Recovery of the volumetric information of clouds relies on 3D radiative transfer, which models 3D multiple scattering. This model is complex and nonlinear, so inverting it poses a major challenge and typically requires a simplification. A common relaxation assumes that clouds are horizontally uniform and infinitely broad, leading to one-dimensional modeling. However, this assumption is generally invalid, since clouds are naturally highly heterogeneous. A novel alternative is to perform cloud retrieval using tools of 3D scattering tomography, in which multiple satellite images of the clouds are acquired from different points of view. For example, simultaneous multi-view radiometric images of clouds are proposed by the CloudCT project, funded by the ERC. Unfortunately, 3D scattering tomography requires high computational resources. In practice, this results in slow run times and prevents large-scale analysis. Moreover, existing scattering tomography is based on iterative optimization, which is sensitive to initialization.

In this work, we introduce a deep neural network for 3D volumetric reconstruction of clouds. In recent years, supervised learning using deep neural networks has led to remarkable results in fields ranging from computer vision to medical imaging. However, these deep learning techniques have not been extensively studied in the context of volumetric atmospheric science, and specifically cloud research.

We present a convolutional neural network (CNN) whose architecture is inspired by the physical nature of clouds. Due to the lack of real-world datasets, we train the network in a supervised manner using a physics-based simulator that generates realistic volumetric cloud fields. In addition, we propose a hybrid approach that combines the proposed neural network with an iterative physics-based optimization technique.

We demonstrate the recovery performance of our proposed method on cloud fields. At the scale of a single cloud, our reconstruction quality is comparable to state-of-the-art methods, while run time improves by orders of magnitude. In contrast to existing physics-based methods, our network offers scalability, which enables the reconstruction of wider cloud fields. Finally, we show that the hybrid approach leads to improved retrieval in a fast process.
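A self-contained toy sketch of the hybrid idea is given below. To keep it runnable, the nonlinear 3D radiative-transfer model is replaced by a simple linear projection operator, and the neural network's prediction by a crude constant initial guess; neither stands in for the paper's actual components. The point is only the two-stage structure: a fast initial estimate followed by iterative physics-based refinement.

```python
# Toy sketch of a hybrid retrieval: fast initial volume estimate, then iterative
# refinement against a forward model. The forward model here is a simple linear
# projection (a stand-in for 3D radiative transfer), purely for illustration.
import numpy as np

rng = np.random.default_rng(2)
vol = np.zeros((16, 16, 16))
vol[5:11, 6:12, 4:9] = rng.uniform(0.5, 1.5, size=(6, 6, 5))    # toy "cloud" extinction

def forward(beta):
    # Stand-in forward model: projections of the volume along the three axes
    return np.concatenate([beta.sum(axis=a).ravel() for a in range(3)])

measurements = forward(vol)

# Stage 1: fast initial estimate (placeholder for the CNN prediction)
beta = np.full_like(vol, measurements.mean() / vol.shape[0])
print("initial data misfit:", round(np.linalg.norm(forward(beta) - measurements), 2))

# Stage 2: iterative refinement via gradient descent on the image-space misfit
step = 1e-3
for _ in range(200):
    residual = forward(beta) - measurements
    grad = np.zeros_like(beta)
    offset, size = 0, measurements.size // 3
    for a in range(3):
        # Adjoint of the projection: broadcast each residual image back along its axis
        img = residual[offset:offset + size].reshape(
            [s for i, s in enumerate(vol.shape) if i != a])
        grad += np.expand_dims(img, axis=a)
        offset += size
    beta = np.maximum(beta - step * grad, 0.0)                   # extinction is non-negative

print("refined data misfit:", round(np.linalg.norm(forward(beta) - measurements), 2))
```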
We present 3DeepCT, a deep neural network for computed tomography that performs 3D reconstruction of scattering volumes from multi-view images. Its architecture is dictated by the stationary nature of atmospheric cloud fields. The task of volumetric scattering tomography aims at recovering a volume from its 2D projections. This problem has been studied extensively, leading to diverse inverse methods based on signal processing and physics models. However, such techniques are typically iterative, exhibiting a high computational load and long convergence time. We show that 3DeepCT outperforms physics-based inverse scattering methods in terms of accuracy, while offering an improvement of several orders of magnitude in computational time. To further improve recovery accuracy, we introduce a hybrid model that combines 3DeepCT with a physics-based method. The resulting hybrid technique enjoys fast inference time and improved recovery performance.
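The sketch below illustrates, in PyTorch, one way an architecture can reflect the horizontal stationarity of cloud fields: shift-invariant 2D convolutions map a stack of multi-view images to a column of extinction values per ground pixel. The layer sizes, number of views, and depth bins are illustrative guesses and do not reproduce the published 3DeepCT architecture.

```python
# Illustrative fully convolutional design exploiting horizontal stationarity:
# multi-view images enter as channels; the output channels are depth bins.
# Not the published 3DeepCT architecture; sizes are arbitrary.
import torch
import torch.nn as nn

class ToyCloudCT(nn.Module):
    def __init__(self, n_views=9, n_depth_bins=32, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_views, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(width, n_depth_bins, kernel_size=3, padding=1),
            nn.ReLU(),                       # extinction is non-negative
        )

    def forward(self, images):               # images: (batch, n_views, H, W)
        return self.net(images)              # volume: (batch, n_depth_bins, H, W)

model = ToyCloudCT()
dummy_views = torch.rand(1, 9, 64, 64)       # 9 synthetic views of a 64x64 footprint
volume = model(dummy_views)
print(volume.shape)                          # torch.Size([1, 32, 64, 64])
```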