2023
DOI: 10.1016/j.engappai.2023.105919

RGB-T image analysis technology and application: A survey

Cited by 26 publications (2 citation statements)
References: 341 publications
“…In addition to using different sensors such as visible cameras and thermal cameras, special components like beam splitters for spatial alignment and synchronization timers for temporal alignment are required during data acquisition [1]. In recent years, researchers have proposed RGB-T object detection datasets that employ specially designed hardware and preprocessing methods to achieve pixel-level alignment and provide annotations shared between modalities [2][3][4]. Most existing RGB-T image object detectors are built upon this inter-modal alignment [5][6].…”
Section: Introduction
confidence: 99%
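The pixel-level inter-modal alignment this excerpt refers to can be illustrated with a minimal sketch: warping a thermal frame into the RGB camera's pixel grid given a known 3×3 homography. The function name, the homography `H`, and the nearest-neighbour sampling are illustrative assumptions, not details taken from the surveyed datasets' actual pipelines.

```python
import numpy as np

def warp_thermal_to_rgb(thermal, H, out_shape):
    """Resample a thermal frame onto the RGB camera's pixel grid.

    H is a 3x3 homography mapping RGB pixel coordinates (x, y, 1)
    to thermal pixel coordinates (inverse mapping), and sampling is
    nearest-neighbour. This is a didactic sketch, not a production
    registration routine.
    """
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    ones = np.ones_like(xs)
    # Homogeneous RGB coordinates, shape 3 x N
    pts = np.stack([xs, ys, ones], axis=-1).reshape(-1, 3).T
    src = H @ pts.astype(float)
    src = src[:2] / src[2]                      # perspective divide
    sx = np.rint(src[0]).astype(int).reshape(h, w)
    sy = np.rint(src[1]).astype(int).reshape(h, w)
    # Keep only source coordinates that fall inside the thermal frame
    valid = (sx >= 0) & (sx < thermal.shape[1]) & \
            (sy >= 0) & (sy < thermal.shape[0])
    out = np.zeros(out_shape, dtype=thermal.dtype)
    out[valid] = thermal[sy[valid], sx[valid]]
    return out
```

In practice the homography would come from calibration of the two cameras (e.g. with a checkerboard visible in both spectra); with an identity homography the function simply copies the thermal image through.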
“…Fully supervised semantic segmentation works [13,14,15,16,17] based on deep learning have achieved satisfactory results, even when they use only a single-modal image as the model's input. Although deep network architectures are effective, they still have the limitation that the model needs a large number of annotated examples.…”
Section: Introduction
confidence: 99%