2021
DOI: 10.48550/arxiv.2108.10831
Preprint

LLVIP: A Visible-infrared Paired Dataset for Low-light Vision

Abstract: Project page: https://bupt-ai-cz.github.io/LLVIP
[Figure 1. Samples of the LLVIP dataset. Top: infrared images. Bottom: visible images. Each column represents a visible-infrared image pair.]
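The caption describes spatially aligned visible-infrared pairs, which suggests a simple loading pattern. Below is a minimal sketch, assuming a local copy of the dataset split into visible/ and infrared/ folders with shared filenames; the directory layout and the sample filename are assumptions for illustration, not a documented API.

```python
# Minimal sketch of loading one LLVIP visible-infrared pair.
# The "visible/train" / "infrared/train" layout and the shared
# filenames are assumptions based on the paired structure above.
from pathlib import Path

from PIL import Image  # pip install pillow

LLVIP_ROOT = Path("LLVIP")  # hypothetical local copy of the dataset


def load_pair(name: str):
    """Return the (visible, infrared) images that share a filename."""
    visible = Image.open(LLVIP_ROOT / "visible" / "train" / name).convert("RGB")
    # Infrared frames are single-channel intensity maps; load as grayscale.
    infrared = Image.open(LLVIP_ROOT / "infrared" / "train" / name).convert("L")
    assert visible.size == infrared.size, "LLVIP pairs are spatially aligned"
    return visible, infrared


if __name__ == "__main__":
    vis, ir = load_pair("010001.jpg")  # hypothetical filename
    print(vis.size, ir.size)
```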

Cited by 3 publications (2 citation statements)
References 22 publications
“…The combined information improves the robot's field of view in the environment. This integration strives to capitalize on the strengths of both sensor types [32,35], overcoming the limitations posed by LiDAR's temporal resolution while leveraging its wide field of view for robust obstacle detection and avoidance.…”
Section: Sensor Fusion
confidence: 99%
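The statement above describes combining a wide field-of-view camera with LiDAR range data. A minimal sketch of the standard projection step that underlies such fusion, assuming calibrated camera intrinsics K and a LiDAR-to-camera extrinsic transform (both are placeholder values here; real ones come from calibration):

```python
# Hedged sketch: project LiDAR points into the camera image so detections
# from the wide field-of-view camera can be assigned a range.
import numpy as np

K = np.array([[700.0, 0.0, 320.0],   # hypothetical camera intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
T_cam_lidar = np.eye(4)              # hypothetical LiDAR-to-camera extrinsics


def project_points(points_lidar: np.ndarray) -> np.ndarray:
    """Map Nx3 LiDAR points to Nx3 (u, v, depth) pixel coordinates."""
    homo = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    cam = (T_cam_lidar @ homo.T).T[:, :3]   # into the camera frame
    cam = cam[cam[:, 2] > 0]                # keep points in front of the camera
    uv = (K @ cam.T).T
    return np.column_stack([uv[:, :2] / uv[:, 2:3], cam[:, 2]])


if __name__ == "__main__":
    pts = np.random.uniform([-5, -2, 1], [5, 2, 30], size=(1000, 3))
    print(project_points(pts)[:3])  # a few (u, v, depth) samples
```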
“…This approach includes capturing a wide range of obstacles found in MRO hangars, including tools, vehicles, and structural elements in the real world and in simulation, and using these datasets to train the model. The public LLVIP dataset [35] was also integrated to expand the capability of recognizing objects in diverse scenarios considering low-light conditions, which are common in certain areas of the MRO hangar or during specific times of the day. For data fusion, the DenseFuse network was first used to fuse RGB and thermal images from the real-world and LLVIP datasets to expand the model's capability to accurately detect obstacles in low-light conditions.…”
Section: Obstacle Detection Module
confidence: 99%
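The DenseFuse pipeline referenced above encodes each modality separately, fuses the resulting feature maps, and decodes a single fused image. The sketch below follows the published DenseFuse layout in spirit (dense-block encoder, addition fusion strategy, convolutional decoder), but it is an illustrative re-implementation with untrained weights, not the citing paper's code:

```python
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Conv stem followed by a small dense block, as in DenseFuse."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        # Each dense layer sees the concatenation of all earlier outputs.
        self.dense = nn.ModuleList(
            nn.Sequential(nn.Conv2d(16 * (i + 1), 16, 3, padding=1), nn.ReLU())
            for i in range(3)
        )

    def forward(self, x):
        feats = [self.stem(x)]
        for layer in self.dense:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)  # 64 channels


class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def fuse(visible_gray: torch.Tensor, infrared: torch.Tensor) -> torch.Tensor:
    """Fuse two aligned 1xHxW grayscale tensors into one image.

    Weights here are untrained; in practice the encoder/decoder are
    trained as an autoencoder before fusion is performed.
    """
    enc, dec = Encoder(), Decoder()
    with torch.no_grad():
        fused_feats = enc(visible_gray) + enc(infrared)  # addition fusion
        return dec(fused_feats)


if __name__ == "__main__":
    v = torch.rand(1, 1, 256, 256)   # stand-in for a visible frame
    i = torch.rand(1, 1, 256, 256)   # stand-in for an infrared frame
    print(fuse(v, i).shape)          # torch.Size([1, 1, 256, 256])
```

The addition strategy is the simplest of the fusion rules proposed for DenseFuse; swapping in a different rule (for example, an activity-based weighting) only changes the single line that combines the two encoded feature maps.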