2022
DOI: 10.1109/tcsvt.2021.3082939

Unified Information Fusion Network for Multi-Modal RGB-D and RGB-T Salient Object Detection

Abstract: Unified information fusion network for multi-modal RGB-D and RGB-T salient object detection. IEEE Transactions on Circuits and Systems for Video Technology.

Cited by 131 publications (31 citation statements)
References 83 publications
“…MIDD [62] proposes a multi-interactive dual decoder to integrate the multi-level interactions of the dual modalities and global contexts. MMNet [63] simulates the visual color stage doctrine to fuse cross-modal features in stages, and designs a bi-directional multi-scale decoder to capture both local and global information. CGFNet [64] adopts the guidance of one modality on the other to fuse the two modalities.…”
Section: B. RGB-T Salient Object Detection
Confidence: 99%
“…For RGB-D SOD, our model is compared with several SOTA RGB-D SOD algorithms, including D3Net [78], ASIF-Net [36], ICNet [89], DCMF [52], DRLF [90], SSF [43], SSMA [38], A2dele [46], UC-Net [91], JL-DCF [92], CoNet [44], DANet [81], EBFSP [93], CDNet [94], HAINet [95], RD3D [49], DSA2F [48], MMNet [63] and VST [6]. To ensure the fairness of the comparison, the evaluated saliency maps are provided by the authors or generated by running their source code.…”
Section: Comparisons With SOTAs: 1) RGB-D SOD
Confidence: 99%
“…Despite some improvements, most feature-fusion-based RGB-D SOD models mentioned above mainly focus on capturing the complementary information within the multi-modality input images, while ignoring the impact of image quality on the representation ability of the fused features, thus degrading the subsequent saliency detection performance. Recently, several studies have addressed the disturbance caused by low-quality images [13], [19]-[21], [54]-[57]. For example, Zhao et al. [19] designed a contrast enhancement module with contrast prior information to enhance the quality of depth images, thus boosting saliency detection performance.…”
Section: B. RGB-D Salient Object Detection
Confidence: 99%
“…[56] modeled a task-oriented depth potentiality perception module to weaken the contamination from unreliable depth information. Gao et al. [57] used content-based spatial attention to select the important responses of intra-modal information.…”
Section: B. RGB-D Salient Object Detection
Confidence: 99%
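The excerpts above repeatedly describe the same pattern: weighting one modality's features by content-derived spatial attention before fusing them with the other modality. As a rough, generic illustration (not the cited papers' actual modules — `spatial_attention_fusion` and its channel-mean "energy" computation are hypothetical simplifications), such a fusion step can be sketched as:

```python
import numpy as np

def spatial_attention_fusion(rgb_feat, depth_feat):
    """Fuse RGB and depth feature maps of shape (C, H, W) with a
    content-based spatial attention weight, a common pattern in
    RGB-D SOD fusion modules (illustrative sketch only)."""
    # Content-based "energy": average activation across channels,
    # squashed to (0, 1) with a sigmoid -> one weight per pixel.
    rgb_energy = rgb_feat.mean(axis=0)      # (H, W)
    depth_energy = depth_feat.mean(axis=0)  # (H, W)
    attn = 1.0 / (1.0 + np.exp(-(rgb_energy + depth_energy)))
    # Weighted combination: pixels with a strong joint response lean
    # on the RGB stream, weak ones fall back to the depth stream.
    fused = attn * rgb_feat + (1.0 - attn) * depth_feat
    return fused, attn

rng = np.random.default_rng(0)
rgb = rng.standard_normal((64, 8, 8))
depth = rng.standard_normal((64, 8, 8))
fused, attn = spatial_attention_fusion(rgb, depth)
print(fused.shape, attn.shape)
```

Real models learn the attention weights with convolutional layers rather than a fixed channel mean, but the gating structure `attn * A + (1 - attn) * B` is the shared idea.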