2023
DOI: 10.1109/jbhi.2023.3264819
An Improved Hybrid Network With a Transformer Module for Medical Image Fusion

Cited by 15 publications (1 citation statement)
References 47 publications
“…Specifically, the hybrid transformer employs the fine-grained attention module to generate global features by exploring long-range dependencies, while the DHRNet is responsible for local information processing. Liu et al. [32] used a CNN and a Transformer module to build the extraction network and the decoder network. In addition, they designed a self-adaptive weighted rule for image fusion.…”
Section: Related Work
Confidence: 99%