MRINet: Multilevel Reverse-Context Interactive-Fusion Network for Detecting Salient Objects in RGB-D Images
2021 · DOI: 10.1109/lsp.2021.3092967

Cited by 15 publications (12 citation statements) · References 36 publications
“…The assessment covers visuospatial/executive function, abstraction, orientation, language, memory, naming, attention, and other cognitive domains, for a total of 30 points; patients scoring <23 points are identified as having cognitive dysfunction [9]. All procedures are evaluated by the same physician in the Department of Neurology of our hospital [10-12].…”
Section: The Experimental Methods
confidence: 99%
“…Based on the prediction evaluation, the proposed CNN-RF method is compared to several recently released mechanisms for detecting damaged power lines with UAV and IoT technologies, including Convolutional Neural Network (CNN) [20,37], CNN combined with Support Vector Machine (CNN-SVM) [23], Focal Phi Loss (FPL) [21,38], and convolutional features and structured constraints (CFSC) [25,39].…”
Section: Results
confidence: 99%
“…2, our proposed architecture has three streams: one main stream in the middle row (orange background) and two auxiliary streams in the top and bottom rows (green background). Inspired by [1], we concatenate paired RGB and thermal images and feed them into the main stream to extract combined features. Unlike [1], because the fusion process requires fusing features extracted from the same stage, we use three VGG16 [2] networks as the backbone.…”
Section: A. Network Architecture
confidence: 99%
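The excerpt above describes a main stream that takes channel-concatenated RGB-thermal pairs while two auxiliary streams process each modality separately. The concatenation step can be sketched as follows; this is a minimal illustration, not the cited paper's implementation, and the `early_fusion` helper name and the (C, H, W) NumPy layout are assumptions:

```python
import numpy as np

def early_fusion(rgb, thermal):
    """Concatenate a paired RGB and thermal image along the channel
    axis, producing the combined input for the main stream.

    rgb:     (3, H, W) array.
    thermal: (3, H, W) array (a single-channel thermal map would be
             replicated to three channels so spatial shapes match).
    Returns a (6, H, W) array.
    """
    assert rgb.shape == thermal.shape, "paired images must align"
    return np.concatenate([rgb, thermal], axis=0)

# Toy paired inputs: the two auxiliary streams would consume `rgb`
# and `thermal` individually, while the main stream consumes `fused`.
rgb = np.random.rand(3, 64, 64).astype(np.float32)
thermal = np.random.rand(3, 64, 64).astype(np.float32)
fused = early_fusion(rgb, thermal)
print(fused.shape)  # (6, 64, 64)
```

In practice each of the three streams would then run its own VGG16-style backbone, with same-stage features fused across streams as the excerpt describes.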