2018 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2018.8462894

Adversarial Training for Adverse Conditions: Robust Metric Localisation Using Appearance Transfer

Abstract: We present a method of improving visual place recognition and metric localisation under very strong appearance change. We learn an invertible generator that can transform the conditions of images, e.g. from day to night or from summer to winter. This image-transforming filter is explicitly designed to aid and abet feature matching using a new loss based on SURF detector and dense descriptor maps. A network is trained to output synthetic images optimised for feature matching given only an input RGB image, and thes…
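The abstract describes a loss that supervises the generator with feature detections and dense descriptors rather than raw pixels. The sketch below illustrates what such a descriptor-consistency objective could look like in PyTorch; it is a minimal approximation under assumptions, not the authors' implementation: `G`, `D`, `describe`, `lambda_feat` and the least-squares GAN formulation are illustrative stand-ins, and a generic frozen dense-feature network takes the place of the paper's SURF-based detector and descriptor maps.

```python
import torch
import torch.nn.functional as F

def appearance_transfer_loss(G, D, describe, src, lambda_feat=10.0):
    """Sketch of a combined adversarial + feature-matching objective.

    G           -- generator mapping source-condition images (e.g. night)
                   to the target condition (e.g. day)
    D           -- discriminator scoring realism in the target condition
    describe    -- frozen network producing dense per-pixel descriptor maps
                   (stand-in for the paper's SURF-based dense descriptors)
    src         -- batch of source-condition RGB images
    lambda_feat -- weight of the feature-matching term (assumed value)
    """
    fake = G(src)

    # Least-squares GAN term: synthetic images should fool the discriminator.
    d_fake = D(fake)
    adv = F.mse_loss(d_fake, torch.ones_like(d_fake))

    # Feature-matching term: descriptors computed on the synthetic image
    # should agree with descriptors of the input, so that cross-condition
    # feature matching survives the appearance change.
    with torch.no_grad():
        ref = describe(src)
    feat = F.l1_loss(describe(fake), ref)

    return adv + lambda_feat * feat
```

The point this sketch is meant to make concrete is the abstract's central design choice: the supervisory signal comes from the feature pipeline the localiser actually uses, so the generator is pushed to preserve matchable structure rather than photorealism for its own sake.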


Cited by 102 publications (73 citation statements). References 32 publications.
“…This concept of robustness can be too restrictive, as different localisation algorithms can have different failure modes. [7] showed the robustness of their algorithms using a measure similar to the probability-of-absence-of-updates metric proposed in our paper. That paper, however, does not provide a detailed explanation of, or focus on, this particular metric.…”
Section: Related Work
confidence: 71%
“…Style-Transfer: Other approaches attempt to train computer vision models directly on synthetic data generated via style transfer, or to adapt the input data directly to the target domain. Notable approaches include those of [4], [5], [23], [3] and [2]. These methods generally seem the most promising for reducing the domain gap between real and synthetic images, hence our decision to generate training data using the approach of [24].…”
Section: B. Domain Adaptation
confidence: 99%
“…As a type of domain adaptation technique, domain unification is the holy grail of visual perception, theoretically allowing models trained on samples with limited heterogeneity to perform adequately on scenes well outside the distribution of the training data. Domain unification can be applied within the vast distribution of natural images [1], [2], [3], between natural and synthetic images (computer-generated, whether through traditional 3D rendering or more modern GAN-based techniques) [4], [5], and even between different sensor modalities [6]. Additionally, domain unification can be implemented at different stages of a computer vision pipeline, ranging from direct approaches such as domain confusion [7], [8], [9] to fine-tuning models on target domains [1] and mixture-of-experts approaches [10].…”
Section: Introduction
confidence: 99%
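The excerpt above lists domain confusion as one direct approach. A common way to realise it is a gradient reversal layer (in the style of Ganin and Lempitsky's domain-adversarial training) that makes the feature extractor maximise a domain classifier's loss; the excerpt does not specify which variant [7], [8], [9] use, so the PyTorch sketch below is a generic illustration with hypothetical names.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated, scaled gradient on the
    backward pass, so whatever feeds this layer is trained to *increase*
    the domain classifier's loss, i.e. to become domain-invariant."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient w.r.t. x is reversed and scaled; lam gets no gradient.
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Hypothetical usage in a training step:
#   feats         = backbone(images)
#   domain_logits = domain_head(grad_reverse(feats, lam=0.5))
#   loss          = task_loss + domain_loss(domain_logits, domain_labels)
```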
“…In recent years, researchers have tackled the multiple-daytime tracking challenge with deep learning approaches designed to convert nighttime images to daytime images, e.g. using GANs [6], [7], [8]. While this improves robustness to changing lighting, one may ask why images should be the best input representation.…”
Section: Introduction
confidence: 99%
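For the night-to-daytime conversion the last excerpt mentions, deployment typically amounts to translating each incoming frame with the trained generator before it reaches the tracker, leaving the tracker itself untouched. A minimal sketch under that assumption follows; the checkpoint name, input size and normalisation constants are placeholders, not details from the cited works.

```python
import torch
from torchvision import transforms
from PIL import Image

# Hypothetical TorchScript export of a trained night-to-day generator.
generator = torch.jit.load("night2day_generator.pt").eval()

preprocess = transforms.Compose([
    transforms.Resize((256, 512)),                        # placeholder size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),  # to [-1, 1]
])

@torch.no_grad()
def night_to_day(path: str) -> torch.Tensor:
    """Translate one nighttime frame into the daytime appearance the
    downstream tracker was trained on."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    y = generator(x).squeeze(0)
    return (y * 0.5 + 0.5).clamp(0, 1)                    # back to [0, 1]
```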