2022
DOI: 10.20944/preprints202209.0276.v1
Preprint
Towards Interpretable Camera and LiDAR data fusion for Unmanned Autonomous Vehicles Localisation

Abstract: Recent deep learning frameworks have drawn strong research interest for ego-motion estimation, as they demonstrate superior results compared to geometric approaches. However, owing to the lack of multimodal datasets, most of these studies have focused primarily on single-sensor estimation. To overcome this challenge, we collect a unique multimodal dataset named LboroAV2 using multiple sensors, including a camera, Light Detection and Ranging (LiDAR), ultrasound, an e-compass and a rotary encoder. We als…

Cited by 3 publications
References 38 publications