2021
DOI: 10.1016/j.media.2021.102137
AW3M: An auto-weighting and recovery framework for breast cancer diagnosis using multi-modal ultrasound

Cited by 32 publications (12 citation statements) · References 24 publications
“…Gu et al 20 designed a DL model comprising multi‐fusion layers to obtain modal‐specific features and correlate information, respectively. Huang et al 21 developed a framework to utilize B‐mode, SWE‐mode, Doppler‐mode, and SE‐mode breast US images to assist breast cancer diagnosis. However, these methods require training the model for deep features and classifiers separately to achieve ensembled classification results.…”
Section: Related Work
confidence: 99%
“…Note that some multi-modal models are closely related to this task as well. We therefore select two state-of-the-art works: AW3M [8] and AdaMML [15], as the former treats different branches differently, while the latter selects different modalities to perform classification for different patients. For the ablation study, we also implement the proposed model without the PAWN and VACL (row 7, Tab.…”
Section: Materials and Experiments
confidence: 99%
“…To address this issue, Zhang et al [37] proposed to use the Mean Squared Error (MSE) to align multimodal feature maps and designed a new contrastive loss to enforce the network to focus on the similarities of segmentation masks from paired modalities as well as dissimilarities of unpaired multi-modal data. Huang et al [38] proposed a SSL algorithm for four-modality ultrasound learning, where Mean Absolute Error across different modalities was minimized to ensure that high-level image features extracted from different modalities can be similar.…”
Section: B. SSL in Medical Imaging
confidence: 99%
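The cross-modal alignment described in the last statement can be sketched as a simple pairwise loss: each modality's encoder produces a high-level feature vector, and the mean absolute error between every pair of modality features is minimized so the representations converge. The sketch below is a minimal NumPy illustration of that idea; the function name, feature shapes, and pairwise averaging scheme are assumptions for clarity, not the authors' actual implementation.

```python
import numpy as np

def mae_alignment_loss(features):
    """Mean absolute error averaged over all pairs of modality features.

    features: list of 1-D arrays of equal length, one per modality
    (e.g. B-mode, SWE, Doppler, SE). Minimizing this term encourages
    the per-modality encoders to produce similar high-level features
    for the same lesion. Shapes and pairing are illustrative only.
    """
    loss, pairs = 0.0, 0
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            loss += float(np.abs(features[i] - features[j]).mean())
            pairs += 1
    return loss / pairs

# Identical features across modalities give zero alignment loss.
f = [np.ones(8), np.ones(8), np.ones(8)]
print(mae_alignment_loss(f))  # → 0.0
```

In practice such a term would be added to the supervised classification loss during training, so that alignment acts as an auxiliary regularizer rather than the sole objective.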