2022
DOI: 10.3389/frobt.2022.801886

Multi-Session Visual SLAM for Illumination-Invariant Re-Localization in Indoor Environments

Abstract: For robots navigating using only a camera, illumination changes in indoor environments can cause re-localization failures during autonomous navigation. In this paper, we present a multi-session visual SLAM approach to create a map made of multiple variations of the same locations in different illumination conditions. The multi-session map can then be used at any hour of the day for improved re-localization capability. The approach presented is independent of the visual features used, and this is demonstrated b…
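The core idea in the abstract — keeping several variants of each location, captured under different illumination, and re-localizing against all of them — can be illustrated with a minimal sketch. This is not the paper's implementation: the session names, place names, and three-dimensional global descriptors below are hypothetical, and a real system would match local visual features rather than toy vectors.

```python
import math

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def relocalize(query, multi_session_map):
    """Return (session, place, score) of the best match across all sessions.

    Because every location appears once per mapping session, a query taken
    at night can still match the night-time variant of a place even if the
    daytime variant looks completely different.
    """
    best = None
    for session, places in multi_session_map.items():
        for place, descriptor in places.items():
            score = cosine(query, descriptor)
            if best is None or score > best[2]:
                best = (session, place, score)
    return best

# Hypothetical descriptors for the same two places mapped at noon and at night.
multi_session_map = {
    "noon":  {"hallway": [0.9, 0.1, 0.2], "lab": [0.1, 0.9, 0.3]},
    "night": {"hallway": [0.2, 0.1, 0.9], "lab": [0.1, 0.3, 0.8]},
}

# A query captured at night resembles the night-session hallway variant.
session, place, score = relocalize([0.25, 0.1, 0.85], multi_session_map)
```

Here the night-time query matches the "night" session's hallway rather than the visually different "noon" variant, which is the multi-session map's advantage over a single-session map.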


Cited by 13 publications (2 citation statements)
References 49 publications
“…The research was conducted in the field in July 2022 and consisted of SLAM (Simultaneous Localization and Mapping) laser scanning and a photographic survey. The SLAM laser scanning was conducted with an iPad Pro, using the built-in laser sensor and the open-source application RTAB-Map (Real-Time Appearance-Based Mapping), developed by a French research team in photogrammetry and robotics for 3D mapping [17,18] and available on GitHub. The photographic survey comprises 178 photographs taken at the site for SfM-MVS purposes with a handheld Sony RX100 III Cyber-shot camera, fitted with a Zeiss Vario-Sonnar lens.…”
Section: Methods
confidence: 99%
“…Finally, in our survey we shall consider a multi-faceted field of computer vision called illumination invariance. Research on this topic covers diverse tasks, such as object detection [65], SLAM [36], and face recognition [86], but the essential question remains roughly the same across them: how can we extract the same information under illumination changes? While this question may well seem to cover specularities, the field is more focused on seasonal, daily, or situational human-made changes in illumination than on specularities, which are a direct effect of the prevailing illumination while being neither a source of illumination nor a change in the sources per se.…”
Section: Other Fields
confidence: 99%
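The "essential question" in the quote above — extracting the same information under illumination changes — has a classic minimal answer worth sketching: descriptors that are normalized to zero mean and unit variance are unchanged by any affine intensity change (a positive gain plus a bias), a common simple model of global illumination change. The patch values and the gain/bias figures below are illustrative, not taken from the cited works.

```python
import math

def normalize(patch):
    """Zero-mean, unit-norm normalization of an intensity patch.

    Subtracting the mean cancels any additive bias; dividing by the norm
    cancels any positive multiplicative gain, so the result is invariant
    to affine illumination changes of the form p -> g*p + b with g > 0.
    """
    mean = sum(patch) / len(patch)
    centered = [p - mean for p in patch]
    norm = math.sqrt(sum(c * c for c in centered))
    return [c / norm for c in centered]

# The same patch under a simulated illumination change: gain 0.5, bias +30.
patch = [10.0, 40.0, 80.0, 20.0]
darker = [0.5 * p + 30.0 for p in patch]

a = normalize(patch)
b = normalize(darker)
same = all(abs(x - y) < 1e-9 for x, y in zip(a, b))
```

This kind of invariance handles global gain/bias shifts only; the seasonal and situational changes the quote mentions are non-uniform, which is why learned or multi-session approaches are used instead.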

Visual semantic navigation with real robots

Gutiérrez-Álvarez, Ríos-Navarro, Flor-Rodríguez-Rabadán et al., 2024, Appl Intell