2022
DOI: 10.1002/btm2.10480

Deep learning‐based multimodal fusion network for segmentation and classification of breast cancers using B‐mode and elastography ultrasound images

Abstract: Ultrasonography is one of the key medical imaging modalities for evaluating breast lesions. For differentiating benign from malignant lesions, computer‐aided diagnosis (CAD) systems have greatly assisted radiologists by automatically segmenting and identifying features of lesions. Here, we present deep learning (DL)‐based methods to segment the lesions and then classify benign from malignant, utilizing both B‐mode and strain elastography (SE‐mode) images. We propose a weighted multimodal U‐Net (W‐MM‐U‐Net) mod…
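The abstract is truncated mid-description of the W‐MM‐U‐Net. As a rough illustration of the general idea it outlines, two modality-specific encoders whose features are fused with learnable weights ahead of a shared segmentation decoder and a classification head, a minimal PyTorch sketch might look like the following. All class names, layer sizes, and the fusion rule are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a two-branch, weighted multimodal U-Net-style network.
# Illustrative assumption only, NOT the authors' W-MM-U-Net implementation.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, as in a typical U-Net stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class WeightedMultimodalUNet(nn.Module):
    """Hypothetical two-encoder U-Net: one branch per modality (B-mode, SE-mode),
    learnable scalar weights to fuse the bottleneck features, a small decoder for
    lesion segmentation, and a pooled classification head (benign vs. malignant)."""

    def __init__(self, base=16):
        super().__init__()
        self.enc_b = nn.ModuleList([conv_block(1, base), conv_block(base, base * 2)])
        self.enc_e = nn.ModuleList([conv_block(1, base), conv_block(base, base * 2)])
        self.pool = nn.MaxPool2d(2)
        # Learnable modality weights, normalized with softmax at fusion time.
        self.modality_logits = nn.Parameter(torch.zeros(2))
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = conv_block(base, base)
        self.seg_head = nn.Conv2d(base, 1, 1)       # lesion mask logits
        self.cls_head = nn.Linear(base * 2, 2)      # benign / malignant logits

    def encode(self, x, enc):
        x = enc[0](x)
        return enc[1](self.pool(x))

    def forward(self, bmode, elasto):
        fb = self.encode(bmode, self.enc_b)          # B-mode features
        fe = self.encode(elasto, self.enc_e)         # SE-mode features
        w = torch.softmax(self.modality_logits, dim=0)
        fused = w[0] * fb + w[1] * fe                # weighted multimodal fusion
        seg = self.seg_head(self.dec(self.up(fused)))
        cls = self.cls_head(fused.mean(dim=(2, 3)))  # global-average-pooled features
        return seg, cls


if __name__ == "__main__":
    net = WeightedMultimodalUNet()
    b = torch.randn(2, 1, 64, 64)                    # B-mode batch
    e = torch.randn(2, 1, 64, 64)                    # SE-mode batch
    mask_logits, class_logits = net(b, e)
    print(mask_logits.shape, class_logits.shape)     # (2, 1, 64, 64) and (2, 2)
```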

Cited by 19 publications (5 citation statements). References 44 publications (94 reference statements).
“…The framework combines a decision network and an integrated feature network to learn multimodal features from B-mode and strain elastography mode when ultrasound images are supplied to multiple CNNs. The study uses multimodal EmbraceNet to fuse the features learnt using the CNN models 35 . A two-level machine learning technique was applied to separately extract unimodal information from two different image sources to classify and then predict breast cancer.…”
Section: Related Work
Mentioning confidence: 99%
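The EmbraceNet fusion referenced in this statement combines per-modality feature vectors by "docking" each to a common length and then sampling, per output coordinate, which modality contributes its value. A minimal PyTorch sketch of that fusion rule is shown below; the class name, dimensions, and the default equal sampling probabilities are assumptions, not the cited implementation.

```python
# Minimal sketch of EmbraceNet-style feature fusion, as used in the cited framework
# to combine features from modality-specific CNNs. Names and sizes are assumptions.
import torch
import torch.nn as nn


class EmbraceNetFusion(nn.Module):
    """Docks each modality's feature vector to a common size, then builds the fused
    vector by sampling, per output dimension, which modality supplies that value."""

    def __init__(self, in_dims, embrace_dim=128):
        super().__init__()
        self.docking = nn.ModuleList([nn.Linear(d, embrace_dim) for d in in_dims])
        self.embrace_dim = embrace_dim

    def forward(self, features, probs=None):
        # features: list of tensors, one per modality, each of shape (batch, in_dims[k])
        docked = torch.stack([dock(f) for dock, f in zip(self.docking, features)], dim=1)
        batch, n_mod, _ = docked.shape
        if probs is None:
            probs = torch.full((batch, n_mod), 1.0 / n_mod, device=docked.device)
        # For every output dimension, draw which modality contributes that coordinate.
        idx = torch.multinomial(probs, self.embrace_dim, replacement=True)  # (batch, dim)
        mask = torch.zeros_like(docked).scatter_(1, idx.unsqueeze(1), 1.0)
        return (docked * mask).sum(dim=1)                                   # (batch, dim)


if __name__ == "__main__":
    fusion = EmbraceNetFusion(in_dims=[256, 512])    # e.g. B-mode and SE-mode CNN features
    fb, fe = torch.randn(4, 256), torch.randn(4, 512)
    print(fusion([fb, fe]).shape)                    # torch.Size([4, 128])
```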
“…Similarly, a weakly supervised clustering-constrained attention multiple instance learning (CLAM) method has been used to overcome the problem of insufficient data in multimodal feature fusion 34 . A multimodal fusion framework (MFF) that relies on a weighted multimodal U-Net (WMM-UNet) has also been proposed, with EmbraceNet used to ensemble multimodal features 35 . Another approach to the use of transfer learning in multimodal fusion learning was demonstrated using social engineering optimization with deep transfer learning on photoacoustic multimodal imaging (PAMI) 36 .…”
Section: Introduction
Mentioning confidence: 99%
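The CLAM method mentioned in this statement builds on attention-based multiple instance learning. A minimal sketch of the gated attention pooling at its core is given below, omitting CLAM's instance-level clustering constraint; all names and sizes are illustrative assumptions rather than the cited implementation.

```python
# Minimal sketch of gated attention pooling for multiple instance learning (MIL),
# the mechanism at the core of CLAM-style methods. Names and sizes are assumptions.
import torch
import torch.nn as nn


class GatedAttentionMIL(nn.Module):
    """Scores each instance (e.g. image patch) with a gated attention network, forms a
    weighted average of instance features, and classifies the resulting bag embedding."""

    def __init__(self, feat_dim=512, hidden=256, n_classes=2):
        super().__init__()
        self.attn_v = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh())
        self.attn_u = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Sigmoid())
        self.attn_w = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, instances):
        # instances: (num_instances, feat_dim) patch features for one bag (one image)
        scores = self.attn_w(self.attn_v(instances) * self.attn_u(instances))  # (N, 1)
        weights = torch.softmax(scores, dim=0)         # attention over instances
        bag = (weights * instances).sum(dim=0)         # (feat_dim,) bag embedding
        return self.classifier(bag), weights


if __name__ == "__main__":
    mil = GatedAttentionMIL()
    patches = torch.randn(50, 512)           # 50 patch features from a pretrained encoder
    logits, attn = mil(patches)
    print(logits.shape, attn.shape)          # torch.Size([2]) torch.Size([50, 1])
```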
“…Misra et al introduced novel deep learning-based techniques for segmenting lesions and distinguishing between benign and malignant cases by analyzing B-mode and strain elastography images. 29 In another research direction, Yuan et al utilized machine learning methods to predict the permeation of drugs through the microneedle-treated skin. 30…”
Section: Disease Diagnosis
Mentioning confidence: 99%
“…Tostado et al developed an AI‐assisted single‐cell phenomic‐transcriptomic platform, which showed great potential in elucidating mechanisms and therapeutic targets against immune response 28 . Misra et al introduced novel deep learning‐based techniques for segmenting lesions and distinguishing between benign and malignant cases by analyzing B‐mode and strain elastography images 29 . In another research direction, Yuan et al utilized machine learning methods to predict the permeation of drugs through the microneedle‐treated skin 30 …”
Section: Disease Diagnosis
Mentioning confidence: 99%
“…Recent CAD studies utilize various combinations of modalities, including gene expression profiles, digital pathology, MRI, ultrasound images, DBT, and FFDM [1][2][3][4][5][6]. These multimodal fusion frameworks can be divided into pixel-level fusion, feature-level fusion, and decision-level fusion.…”
Section: Introduction
Mentioning confidence: 99%
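The quoted introduction divides multimodal fusion approaches into pixel-level, feature-level, and decision-level stages. The toy sketch below, with hypothetical encoders and sizes, simply contrasts where the fusion happens in each case; it is not drawn from any of the cited works.

```python
# Illustrative contrast of the three fusion stages named above: pixel-level (stack raw
# images), feature-level (concatenate embeddings), decision-level (combine predictions).
import torch
import torch.nn as nn


def make_encoder(in_ch):
    """Tiny CNN encoder returning an 8-dim embedding per image (illustrative only)."""
    return nn.Sequential(nn.Conv2d(in_ch, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())


enc_a, enc_b, enc_ab = make_encoder(1), make_encoder(1), make_encoder(2)
head_pixel, head_feat = nn.Linear(8, 2), nn.Linear(16, 2)
head_a, head_b = nn.Linear(8, 2), nn.Linear(8, 2)

xa = torch.randn(4, 1, 64, 64)   # modality A, e.g. B-mode
xb = torch.randn(4, 1, 64, 64)   # modality B, e.g. elastography

# 1) Pixel-level fusion: stack the modalities as channels before any network.
pixel_logits = head_pixel(enc_ab(torch.cat([xa, xb], dim=1)))

# 2) Feature-level fusion: encode separately, concatenate embeddings, classify jointly.
feature_logits = head_feat(torch.cat([enc_a(xa), enc_b(xb)], dim=1))

# 3) Decision-level fusion: classify per modality, then combine the predictions.
decision_logits = 0.5 * head_a(enc_a(xa)) + 0.5 * head_b(enc_b(xb))

print(pixel_logits.shape, feature_logits.shape, decision_logits.shape)  # all (4, 2)
```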