2020
DOI: 10.48550/arxiv.2003.03026
Preprint

DA4AD: End-to-End Deep Attention-based Visual Localization for Autonomous Driving

Abstract: We present a visual localization framework aided by novel deep attention aware features for autonomous driving that achieves centimeter level localization accuracy. Conventional approaches to the visual localization problem rely on handcrafted features or human-made objects on the road. They are known to be either prone to unstable matching caused by severe appearance or lighting changes, or too scarce to deliver constant and robust localization results in challenging scenarios. In this work, we seek to exploi…
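The abstract only outlines the idea of selecting attention-aware features for matching; the sketch below is a minimal, hypothetical Python illustration of that general pattern, not the paper's implementation. All function names, array shapes, and thresholds are assumptions made for this example: per-feature attention scores are used to keep the most salient descriptors in a frame, which are then matched against a prior map by brute-force nearest neighbour.

```python
import numpy as np

def select_attended_features(descriptors, attention, keep_ratio=0.25):
    """Keep the descriptors whose attention score is in the top fraction.

    descriptors : (N, D) array of local feature descriptors (hypothetical input)
    attention   : (N,) array of per-feature attention scores
    """
    k = max(1, int(len(attention) * keep_ratio))
    top = np.argsort(attention)[-k:]          # indices of the most salient features
    return descriptors[top], top

def match_to_map(query_desc, map_desc, max_dist=0.7):
    """Brute-force nearest-neighbour matching between query and map descriptors."""
    # Pairwise Euclidean distances, shape (num_query, num_map)
    d = np.linalg.norm(query_desc[:, None, :] - map_desc[None, :, :], axis=-1)
    nn = d.argmin(axis=1)
    keep = d[np.arange(len(nn)), nn] < max_dist
    # Return (num_matches, 2) pairs of (query index, map index)
    return np.stack([np.nonzero(keep)[0], nn[keep]], axis=1)

# Toy usage with random data standing in for a camera frame and a prior map.
rng = np.random.default_rng(0)
frame_desc = rng.normal(size=(500, 128)).astype(np.float32)
frame_att = rng.random(500)
selected, _ = select_attended_features(frame_desc, frame_att)
map_desc = rng.normal(size=(2000, 128)).astype(np.float32)
matches = match_to_map(selected, map_desc, max_dist=20.0)
print(matches.shape)
```

In a real localization pipeline the matched 2D-3D correspondences would feed a pose solver; that stage is omitted here since the truncated abstract gives no detail about it.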

Cited by 1 publication (1 citation statement)
References 69 publications
“…Salient DSO [20] combines a saliency model and an image segmentation model with Direct Sparse Odometry (DSO) [21]. Zhou et al. [22] utilize saliency information to extract and match features. This method enhances the performance of DSO.…”
Section: Related Work (mentioning)
Confidence: 99%