2022
DOI: 10.1109/jbhi.2022.3161466

Adaptive Multimodal Fusion With Attention Guided Deep Supervision Net for Grading Hepatocellular Carcinoma

Cited by 15 publications (4 citation statements)
References 30 publications
“…The cross-modal intra- and inter-attention module considers the intra- and inter-relation among modalities to harness the complementary information among different phases, which can help deep learning focus on the most informative region of each modality and select the most useful features for predictions [ 23 ]. Moreover, attention modules can provide attention weights that can be visualized and display salient regions to predictions, increasing the clinical interpretability of prediction models [ 21 , 24 ]. Therefore, the motivation of this study was to apply the attention-guided feature fusion network proposed by Li’s group [ 23 ] to develop MVI prediction models based on multi-phase MRI and to acquire visual explanations of predictions through the visualization of weights offered by attention modules.…”
Section: Methods
confidence: 99%
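The fusion idea in the quoted passage can be sketched at toy scale. This is not Li's network: it is a minimal NumPy illustration in which each modality's feature vector is scored, the scores are softmax-normalized into attention weights, and the weighted sum forms the fused representation. The phase names and the mean-activation scoring rule are invented for the example (a real model would learn the scoring function).

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(features):
    """Fuse per-modality feature vectors with scalar attention weights.

    `features` maps a modality name (e.g. an MRI phase) to a 1-D feature
    vector. Each modality gets a score (here just the mean activation, a
    stand-in for a learned scoring network); a softmax over the scores
    yields attention weights, and the fused representation is their
    weighted sum. Returning the weights lets them be inspected or
    visualized, which is the interpretability point made above.
    """
    names = list(features)
    mat = np.stack([features[n] for n in names])   # (modalities, dim)
    scores = mat.mean(axis=1)                      # per-modality score
    weights = softmax(scores)                      # nonnegative, sums to 1
    fused = weights @ mat                          # weighted sum, shape (dim,)
    return fused, dict(zip(names, weights))

# Toy multi-phase features (values are illustrative only).
phases = {
    "arterial": np.array([0.9, 0.1, 0.4]),
    "portal":   np.array([0.2, 0.3, 0.2]),
    "delayed":  np.array([0.1, 0.1, 0.1]),
}
fused, weights = attention_fuse(phases)
```

Here the arterial phase receives the largest weight because its activations score highest, and inspecting `weights` plays the role of the attention-weight visualization described above.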
“…The cross-modal intra- and inter-attention module considers the intra- and inter-relation among modalities to harness the complementary information among different phases, which can help deep learning focus on the most informative region of each modality and select the most useful features for predictions [23]. Moreover, attention modules can provide attention weights that can be visualized and display salient regions to predictions, increasing the clinical interpretability of prediction models [21,24]. Therefore, the motivation of this…”
Section: Model Development
confidence: 99%
“…Cancer probability can be assessed by using the recognized features and their fusion. However, this task can be highly challenging, even for medical experts, because nodule presence and positive cancer diagnoses are not simply interrelated [9]. A computer-aided diagnosis (CAD) approach uses previously analyzed features that are in some way associated with cancer suspicion, such as shape, sphericity, volume, subtlety, spiculation, and solidity.…”
Section: Introduction
confidence: 99%
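Feature-based CAD approaches like the one described above are often realized as a simple scoring model over hand-crafted features. The sketch below is a generic logistic scorer, not the cited system; the feature names and weights are illustrative assumptions, and a real CAD model would fit its parameters to annotated data.

```python
import math

# Illustrative weights for hand-crafted nodule features; a real CAD
# system would learn these from annotated cases, not set them by hand.
WEIGHTS = {"spiculation": 1.5, "sphericity": -0.8, "volume_cm3": 0.05,
           "solidity": 0.6, "subtlety": 0.3}
BIAS = -1.0

def suspicion(nodule):
    """Map a dict of nodule features to a malignancy probability
    with a logistic model (a common CAD baseline)."""
    z = BIAS + sum(w * nodule.get(k, 0.0) for k, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

# A strongly spiculated, fairly large nodule scores as suspicious.
p = suspicion({"spiculation": 2.0, "sphericity": 0.5, "volume_cm3": 3.0,
               "solidity": 1.0, "subtlety": 1.0})
```

The logistic form keeps the output in (0, 1) so it can be read as a suspicion probability; the point of the passage is that no single feature decides the diagnosis, which here corresponds to the score being a weighted combination rather than a threshold on any one input.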