Ultrasonography is one of the key medical imaging modalities for evaluating breast lesions. Computer-aided diagnosis (CAD) systems have greatly assisted radiologists in differentiating benign from malignant lesions by automatically segmenting lesions and identifying their features. Here, we present deep learning (DL)-based methods that first segment lesions and then classify them as benign or malignant, utilizing both B-mode and strain elastography (SE-mode) images. We propose a weighted multimodal U-Net (W-MM-U-Net) model for lesion segmentation, in which optimal weights are assigned to the different imaging modalities through a weighted skip-connection method that emphasizes their relative importance. We design a multimodal fusion framework (MFF) that classifies lesions as benign or malignant from cropped B-mode and SE-mode ultrasound (US) lesion images. The MFF consists of an integrated feature network (IFN) and a decision network (DN). Unlike other recent fusion methods, the proposed MFF can simultaneously learn complementary information from convolutional neural networks (CNNs) trained on B-mode and SE-mode US images. The features from the CNNs are ensembled using the multimodal EmbraceNet model, and the DN classifies the images using those features. The experimental results on real-world clinical data (sensitivity of 100 ± 0.00% and specificity of 94.28 ± 7.00%) show that the proposed method outperforms existing single- and multimodal methods. It correctly predicted all seven benign patients as benign in three out of five trials and all six malignant patients as malignant in five out of five trials. The proposed method could potentially enhance radiologists' classification accuracy for breast cancer detection in US images.
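The abstract gives no implementation details, but the weighted-skip idea can be illustrated with a minimal sketch: a learnable weight per modality gates how much each encoder's skip feature contributes to the shared decoder. PyTorch is assumed, and all module and parameter names below are hypothetical; the softmax normalization is an assumption, not something stated in the paper.

    # Hypothetical sketch of a weighted skip connection in the spirit of
    # W-MM-U-Net: learnable scalar weights decide how much each modality's
    # encoder feature map contributes to the shared decoder.
    import torch
    import torch.nn as nn

    class WeightedSkip(nn.Module):
        """Fuse B-mode and SE-mode skip features with learnable weights."""
        def __init__(self):
            super().__init__()
            # One logit per modality; softmax keeps the weights positive
            # and summing to 1 (an assumed normalization).
            self.logits = nn.Parameter(torch.zeros(2))

        def forward(self, feat_b, feat_se):
            w = torch.softmax(self.logits, dim=0)
            return w[0] * feat_b + w[1] * feat_se

    # Usage: fuse same-shape encoder features from the two modalities.
    skip = WeightedSkip()
    fused = skip(torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128))
    print(fused.shape)  # torch.Size([1, 64, 128, 128])

Normalizing the weights keeps the fused feature map on the same scale as a single-modality skip connection, which is one plausible reading of "optimal weight" in the abstract.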
Three-dimensional (3D) handheld photoacoustic (PA) and ultrasound (US) imaging performed using mechanical scanning is more useful than conventional 2D PA/US imaging for obtaining local volumetric information and reducing operator dependence. In particular, 3D multispectral PA imaging can capture vital functional information, such as hemoglobin concentration and hemoglobin oxygen saturation (sO2), in epidermal, hemorrhagic, ischemic, and cancerous diseases. However, the accuracy of PA morphology and of the derived physiological parameters is degraded by motion artifacts during image acquisition. This paper aims to apply an appropriate correction that removes the effect of such motion artifacts. We propose a new motion compensation method that corrects PA images in both the axial and lateral directions based on structural US information. Three-dimensional PA/US imaging experiments are performed on a tissue-mimicking phantom and on a human wrist to verify the effect of the proposed motion compensation and of the consequent spectral unmixing. Comparing the motion-compensated images with the original images confirms that both the structural motion and the sO2 values are successfully corrected. The proposed method is expected to be useful in various clinical PA imaging applications (e.g., breast cancer, thyroid cancer, and carotid artery disease) that are susceptible to motion contamination during multispectral PA image analysis.
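The abstract describes the compensation only at a high level (axial and lateral correction of PA frames driven by structural US information). One common way to realize such a rigid, per-frame correction is phase correlation between consecutive US frames; the sketch below uses that as an illustrative stand-in, since the paper's actual estimator is not given. NumPy is assumed, and all function names are hypothetical.

    # Minimal sketch of US-guided rigid motion compensation: estimate the
    # (axial, lateral) shift between two US frames by phase correlation,
    # then undo that shift on the co-registered PA frame.
    import numpy as np

    def estimate_shift(us_ref, us_moving):
        """Integer (axial, lateral) shift mapping us_ref onto us_moving."""
        f = np.conj(np.fft.fft2(us_ref)) * np.fft.fft2(us_moving)
        corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Map wrap-around peak positions to signed shifts.
        if dy > corr.shape[0] // 2:
            dy -= corr.shape[0]
        if dx > corr.shape[1] // 2:
            dx -= corr.shape[1]
        return int(dy), int(dx)

    def compensate(pa_frame, us_ref, us_moving):
        """Undo the US-estimated motion on the co-registered PA frame."""
        dy, dx = estimate_shift(us_ref, us_moving)
        return np.roll(pa_frame, shift=(-dy, -dx), axis=(0, 1))

    # Usage: a frame displaced by (3, -2) pixels is recovered.
    rng = np.random.default_rng(0)
    ref = rng.standard_normal((128, 128))
    moving = np.roll(ref, shift=(3, -2), axis=(0, 1))
    print(estimate_shift(ref, moving))  # (3, -2)

Because the shift is estimated from the structural US data and only applied to the PA data, the correction does not depend on PA image contrast, which varies across wavelengths in multispectral acquisition.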