2023
DOI: 10.1016/j.asoc.2022.109857

AEFusion: A multi-scale fusion network combining Axial attention and Entropy feature Aggregation for infrared and visible images

Cited by 15 publications (15 citation statements)
References 52 publications

“…To comprehensively evaluate the proposed method, we perform qualitative and quantitative experiments on the MSRS dataset [7] with 361 image pairs, the LLVIP dataset [48] with 389 randomly selected image pairs, and the TNO dataset [49] with 16 randomly selected image pairs. We compare our method with eight state-of-the-art (SOTA) approaches, including DenseFuse [24], FusionGAN [26], SwinFuse [29], U2Fusion [50], AUIF [51], CUFD [23], MUFusion [52], and AEFusion [53]. The implementations of these approaches are publicly available, and the initial parameters of the compared methods remain the same.…”
Section: Experimental Results and Analysis (mentioning)
confidence: 99%
“…This reduces the complexity from $O(HWm^2)$ to $O(2HWm)$ by calculating the axial attention on the H and W axes, respectively [17]. The axial attention along the width-axis (height-axis) is shown in Figure 4b. $F_{vis}^{1}$ is processed by the axial attention mechanism to obtain $F_{AXI} \in \mathbb{R}^{C \times H \times W}$.…”
Section: Axial-attention Module (mentioning)
confidence: 99%
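The complexity claim quoted above is easier to see in code. Below is a minimal, hypothetical PyTorch sketch of single-head axial attention, not the authors' exact AEFusion module: the layer names, the single-head formulation, and the full-span (m = H or m = W) setting are assumptions. Attention is computed along the H axis and then the W axis, so each pixel attends to one row and one column rather than a 2D neighbourhood, which is where the O(HWm^2) to O(2HWm) reduction comes from.

```python
# Minimal sketch of axial attention (assumed design, not the published AEFusion code).
import torch
import torch.nn as nn


class AxialAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Shared 1x1 projection producing queries, keys, and values.
        self.to_qkv = nn.Conv2d(channels, channels * 3, kernel_size=1, bias=False)
        self.scale = channels ** -0.5

    def _attend(self, q, k, v):
        # q, k, v: (batch, length, channels) for a single axis.
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v

    def forward(self, x):
        # x: (B, C, H, W), e.g. the visible-branch feature F_vis^1.
        b, c, h, w = x.shape

        # Height-axis attention: each column is treated as an independent sequence.
        q, k, v = self.to_qkv(x).chunk(3, dim=1)
        qh, kh, vh = (t.permute(0, 3, 2, 1).reshape(b * w, h, c) for t in (q, k, v))
        out_h = self._attend(qh, kh, vh).reshape(b, w, h, c).permute(0, 3, 2, 1)

        # Width-axis attention on the height-attended features.
        q, k, v = self.to_qkv(out_h).chunk(3, dim=1)
        qw, kw, vw = (t.permute(0, 2, 3, 1).reshape(b * h, w, c) for t in (q, k, v))
        out_w = self._attend(qw, kw, vw).reshape(b, h, w, c).permute(0, 3, 1, 2)

        return out_w  # F_AXI, shape (B, C, H, W)


# Quick shape check.
if __name__ == "__main__":
    f_vis = torch.randn(2, 64, 32, 32)
    print(AxialAttention(64)(f_vis).shape)  # torch.Size([2, 64, 32, 32])
```

In the quoted passage the input would correspond to $F_{vis}^{1}$ and the output to $F_{AXI}$, both of shape $C \times H \times W$; per pixel the cost is proportional to H + W rather than H * W.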
“…All mentioned MARL algorithms have been implemented in environments such as StarCraft II [32] or other benchmark problems with a single common reward signal. On the other hand, AIIR-MIX in [33] is an extended QMIX in which agents consider self-generated internal rewards based on state information in addition to external rewards. This corresponds to the goal of influencing the behavior of individual agents or a group of agents in a multi-agent system, which COMA and Consensus-oriented Strategy (CoS) [34] also pursue.…”
Section: Multi Agent Reinforcement Learning (mentioning)
confidence: 99%
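As a rough illustration of the intrinsic-reward idea attributed to AIIR-MIX in the statement above, the sketch below combines a shared external team reward with a per-agent, self-generated internal reward computed from local state information. This is a schematic under assumptions (the IntrinsicRewardNet architecture, its observation input, and the mixing weight beta are all hypothetical), not the published AIIR-MIX algorithm.

```python
# Schematic sketch: external team reward plus self-generated internal rewards.
import torch
import torch.nn as nn


class IntrinsicRewardNet(nn.Module):
    """Maps an agent's local observation to a scalar internal reward (assumed design)."""

    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs).squeeze(-1)


def mixed_rewards(extrinsic_reward, observations, intrinsic_nets, beta=0.1):
    """Combine the common external reward with per-agent internal rewards.

    extrinsic_reward: scalar team reward shared by all agents
    observations:     list of per-agent observation tensors
    intrinsic_nets:   one IntrinsicRewardNet per agent
    beta:             weighting of the internal signal (assumed hyperparameter)
    """
    return [
        extrinsic_reward + beta * net(obs)
        for net, obs in zip(intrinsic_nets, observations)
    ]


# Usage: two agents with 8-dimensional observations and a shared reward of 1.0.
nets = [IntrinsicRewardNet(8) for _ in range(2)]
obs = [torch.randn(8) for _ in range(2)]
print(mixed_rewards(torch.tensor(1.0), obs, nets))
```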