Deep learning (DL) has proved successful in medical imaging and, in the wake of the recent COVID-19 pandemic, some works have started to investigate DL-based solutions for the assisted diagnosis of lung diseases. While existing works focus on CT scans, this paper studies the application of DL techniques to the analysis of lung ultrasonography (LUS) images. Specifically, we present a novel fully annotated dataset of LUS images collected from several Italian hospitals, with labels indicating the degree of disease severity at the frame level, video level, and pixel level (segmentation masks). Leveraging these data, we introduce several deep models that address relevant tasks for the automatic analysis of LUS images. In particular, we present a novel deep network, derived from Spatial Transformer Networks, that simultaneously predicts the disease severity score associated with an input frame and provides localization of pathological artefacts in a weakly supervised way. Furthermore, we introduce a new method based on uninorms for effective frame-score aggregation at the video level. Finally, we benchmark state-of-the-art deep models for estimating pixel-level segmentations of COVID-19 imaging biomarkers. Experiments on the proposed dataset demonstrate satisfactory results on all the considered tasks, paving the way for future research on DL for the assisted diagnosis of COVID-19 from LUS data.
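The uninorm-based aggregation can be sketched as follows. Uninorms generalize t-norms with a neutral element e in (0, 1), so frame scores above e reinforce each other while scores below e attenuate the aggregate. The abstract does not specify which uninorm is used; the sketch below assumes the classical symmetric-sum uninorm with e = 0.5, and the function names are illustrative.

```python
import numpy as np

def uninorm(x, y):
    """Symmetric-sum uninorm with neutral element 0.5 (the "3-Pi" operator).
    It behaves conjunctively when both inputs are below 0.5 and
    disjunctively when both are above, which suits evidence aggregation."""
    num = x * y
    return num / (num + (1.0 - x) * (1.0 - y))

def aggregate_video_score(frame_scores):
    """Fold per-frame severity scores in (0, 1) into one video-level score."""
    agg = 0.5  # the neutral element: contributes no evidence
    for s in frame_scores:
        agg = uninorm(agg, s)
    return agg
```

With this operator, a run of confident frames pushes the video score beyond any single frame score, while a run of low scores pulls it below them, unlike a plain average.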
The majority of exchanges of oxygen and nutrients take place around vessels smaller than 100 µm, allowing cells to thrive everywhere in the body. Pathologies such as cancer, diabetes, and arteriosclerosis can profoundly alter the microvasculature. Unfortunately, medical imaging modalities provide only indirect observation at this scale. Inspired by optical microscopy, ultrasound localization microscopy has bypassed the classic compromise between penetration and resolution in ultrasonic imaging. By localizing individual injected microbubbles and tracking their displacement with subwavelength resolution, vascular and velocity maps can be produced at the micrometer scale. Super-resolution ultrasound has also been achieved through signal fluctuations with the same type of contrast agents, or by switching nano-sized phase-change contrast agents on and off. These techniques are now being applied preclinically and clinically to image the microvasculature of the brain, kidney, skin, tumors, and lymph nodes.
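The core localization step can be illustrated with a minimal sketch: find isolated intensity peaks in a frame and refine each to sub-pixel precision with an intensity-weighted centroid. Real ULM pipelines add PSF fitting, bubble tracking, and accumulation over many thousands of frames; the function below and its parameters are illustrative assumptions, not the method of any specific system.

```python
import numpy as np

def localize_bubbles(frame, threshold, win=1):
    """Locate isolated microbubble echoes in a 2-D frame.

    A pixel is a candidate if it exceeds `threshold` and is the maximum of
    its (2*win+1)^2 neighborhood; its position is then refined to sub-pixel
    precision with an intensity-weighted centroid over that neighborhood."""
    peaks = []
    h, w = frame.shape
    for i in range(win, h - win):
        for j in range(win, w - win):
            patch = frame[i - win:i + win + 1, j - win:j + win + 1]
            if frame[i, j] >= threshold and frame[i, j] == patch.max():
                ys, xs = np.mgrid[i - win:i + win + 1, j - win:j + win + 1]
                total = patch.sum()
                peaks.append((float((ys * patch).sum() / total),
                              float((xs * patch).sum() / total)))
    return peaks
```

Accumulating such sub-pixel positions over many frames, and linking them into tracks, is what yields vascular and velocity maps below the diffraction limit.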
Contrast-enhanced ultrasound is a radiation-free imaging modality that uses encapsulated gas microbubbles for improved visualization of the vascular bed deep within tissue. It has recently been used to enable imaging with unprecedented subwavelength spatial resolution by relying on super-resolution techniques. A typical preprocessing step in super-resolution ultrasound is to separate the microbubble signal from the cluttering tissue signal; this step has a crucial impact on the final image quality. Here, we propose a new approach to clutter removal based on robust principal component analysis (PCA) and deep learning. We begin by modeling the acquired contrast-enhanced ultrasound signal as a combination of low-rank and sparse components. This model is used in robust PCA and was previously suggested in the context of ultrasound Doppler processing and dynamic magnetic resonance imaging. We then illustrate that an iterative algorithm based on this model separates the microbubble signal from the tissue signal better than commonly practiced methods. Next, we apply the concept of deep unfolding to suggest a deep network architecture tailored to our clutter-filtering problem, which exhibits improved convergence speed and accuracy with respect to its iterative counterpart. We compare the performance of the suggested deep network, on both simulations and in-vivo rat brain scans, with a commonly practiced deep-network architecture and the fast iterative shrinkage algorithm, and show that our architecture yields better image quality and contrast.
Deep learning is taking an ever more prominent role in medical imaging. This paper discusses applications of this powerful approach in ultrasound imaging systems, along with domain-specific opportunities and challenges. We consider deep learning strategies in ultrasound systems, from the front end to advanced applications. Our goal is to provide the reader with a broad understanding of the possible impact of deep learning methodologies on many aspects of ultrasound imaging. In particular, we discuss methods that lie at the interface of signal acquisition and machine learning, exploiting both data structure (e.g., sparsity in some domain) and data dimensionality (big data) already at the raw radio-frequency channel stage. As examples, we outline efficient and effective deep learning solutions for adaptive beamforming and adaptive spectral Doppler through artificial agents, learned compressive encodings for color Doppler, and a framework for structured signal recovery that learns fast approximations of iterative minimization problems, with applications to clutter suppression and super-resolution ultrasound. These emerging technologies may have considerable impact on ultrasound imaging, showing promise across key components of the receive processing chain.
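The idea of learning fast approximations of iterative minimization can be sketched on the simplest case, l1-regularized least squares: each ISTA iteration becomes a network layer whose matrices and thresholds are free parameters (the LISTA construction). The sketch below only evaluates such an unfolded graph with given parameters; training is omitted, and all names are illustrative.

```python
import numpy as np

def soft(x, t):
    """Element-wise soft threshold, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=200):
    """Classic ISTA for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x + step * A.T @ (y - A @ x), step * lam)
    return x

def unfolded_ista(Ws, Ss, thetas, y):
    """K-layer unfolded (LISTA-style) network.

    Layer k computes x = soft(W_k y + S_k x, theta_k); with W_k, S_k,
    theta_k fixed to the ISTA values this reproduces ISTA, and training
    them lets a few layers match many plain iterations."""
    x = soft(Ws[0] @ y, thetas[0])
    for W, S, th in zip(Ws[1:], Ss, thetas[1:]):
        x = soft(W @ y + S @ x, th)
    return x
```

Setting `Ws[0] = step * A.T` and `thetas[0] = step * lam` makes the first layer identical to the first ISTA iteration; learned parameters then play the role of an accelerated, data-adapted solver.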
B-lines are ultrasound imaging artifacts that correlate with several lung pathologies; however, their understanding and characterization are still largely incomplete. To further study B-lines, lung phantoms were developed by trapping a layer of microbubbles in tissue-mimicking gel. To simulate the alveolar size reduction typical of various pathologies, 170 and 80 µm bubbles were used for phantom types 1 and 2, respectively (a normal alveolar diameter is approximately 280 µm). An LA332 linear array connected to the ULA-OP platform was used for imaging. Standard ultrasound (US) imaging at 4.5 MHz was performed. Subsequently, a multi-frequency approach was used in which images were sequentially generated using orthogonal sub-bands centered at different frequencies (3, 4, 5, and 6 MHz). Results show that B-lines appear predominantly with phantom type 2. Moreover, the multi-frequency approach revealed that the B-lines originate from a specific portion of the US spectrum. These results could lead to significant clinical applications: if further confirmed by extensive in-vivo studies, the native frequency of B-lines could provide a quantitative measure of the state of the lung.
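The multi-frequency decomposition can be sketched as follows: filter each RF line into sub-bands around the chosen center frequencies and demodulate each band to its envelope, from which a per-frequency image can be formed. The brick-wall FFT filter below is an illustrative assumption; it does not reproduce the orthogonal sub-band design used with the ULA-OP platform.

```python
import numpy as np

def subband_envelopes(rf, fs, centers, bw):
    """Split one RF line (sampled at fs, in Hz) into sub-bands centered at
    the given frequencies and return the envelope of each band.

    Each band keeps only the positive frequencies within +/- bw/2 of its
    center; doubling that one-sided spectrum yields the analytic signal,
    whose magnitude is the envelope."""
    n = rf.size
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    spectrum = np.fft.fft(rf)
    envelopes = []
    for fc in centers:
        mask = (freqs >= fc - bw / 2) & (freqs <= fc + bw / 2)
        band = np.fft.ifft(2.0 * spectrum * mask)  # analytic signal of band
        envelopes.append(np.abs(band))
    return envelopes
```

Comparing the envelope images across the 3, 4, 5, and 6 MHz bands is what reveals the portion of the spectrum from which a B-line originates.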