2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros45743.2020.9341243
Crowdsourced 3D Mapping: A Combined Multi-View Geometry and Self-Supervised Learning Approach

Abstract: The ability to efficiently utilize crowdsourced visual data carries immense potential for the domains of large-scale dynamic mapping and autonomous driving. However, state-of-the-art methods for crowdsourced 3D mapping assume prior knowledge of camera intrinsics. In this work, we propose a framework that estimates the 3D positions of semantically meaningful landmarks, such as traffic signs, without assuming known camera intrinsics, using only a monocular color camera and GPS. We utilize multi-view geometry as well…
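The paper's full pipeline is not reproduced here, but the multi-view-geometry core that such landmark positioning builds on can be illustrated with standard linear (DLT) triangulation of a single landmark observed from two views. This is a generic sketch, not the authors' implementation; the projection matrices and pixel observations below are assumed inputs (e.g., poses derived from GPS and intrinsics estimated elsewhere).

```python
import numpy as np

def triangulate_landmark(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one landmark from two views.

    P1, P2: 3x4 camera projection matrices, e.g. K @ [R | t].
    x1, x2: 2D pixel observations (u, v) of the same landmark.
    Returns the 3D point in the common world frame.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2] @ X) - (P[0] @ X) = 0, etc.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest
    # singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

In a crowdsourced setting, each camera pose would come from GPS plus estimated ego-motion, and many such two-view (or n-view) triangulations of the same traffic sign would be aggregated into one landmark position.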


Cited by 6 publications (6 citation statements); References 43 publications.
“…To further evaluate our proposed system, we empirically study the impact of the number of turns used, as well as the frame rate of the videos, on the calibration performance. We show that our system is superior to (Santana-Cedrés et al., 2017) as well as (Chawla et al., 2020a; Chawla et al., 2020b), which in turn outperforms the self-supervised (Gordon et al., 2019). Finally, we demonstrate the application of our system for chessboard-free, accurate monocular dense depth and ego-motion estimation on uncalibrated videos.…”
Section: Methods
confidence: 58%
“…However, their applicability is constrained by the variety of images with different combinations of ground-truth camera parameters used in training. On the other hand, self-supervised methods (Gordon et al., 2019) do not achieve similar performance (Chawla et al., 2020b). SfM has also been utilized to estimate camera parameters from a crowdsourced collection of images (Schonberger and Frahm, 2016).…”
Section: Related Work
confidence: 99%
“…However, the process of camera calibration typically demands expertise, relying on experimentation and calculations to acquire the camera's intrinsic and extrinsic parameters. Therefore, some researchers have proposed HD map-construction methods that do not rely on the intrinsic and extrinsic parameters of cameras or smartphones [36][37][38]. Chawla et al. proposed a method for extracting 3D positions of landmarks in HD maps [36].…”
Section: Camera-based Data-collection Methods
confidence: 99%
“…Moreover, self-supervised monocular depth estimation still requires prior knowledge of the camera intrinsics (focal length and principal point) during training, which may be different for each data source, may change over time, or be unknown a priori (Chawla et al., 2020). While multiple approaches to supervised camera intrinsics estimation have been proposed (Lopez et al., 2019; Zhuang et al., 2019), not many self-supervised approaches exist (Gordon et al., 2019).…”
Section: Related Work
confidence: 99%
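For readers unfamiliar with the "camera intrinsics" the statement above refers to, they are conventionally packed into a 3x3 pinhole matrix built from the focal lengths and principal point. A minimal sketch (the numeric values in the docstring example are illustrative, not from any cited dataset):

```python
import numpy as np

def intrinsics_matrix(fx, fy, cx, cy):
    """Pinhole intrinsics matrix K.

    fx, fy: focal lengths in pixels; cx, cy: principal point in pixels.
    These are exactly the parameters self-supervised depth methods
    typically need to know (or estimate) for each data source.
    """
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

def project(K, X_cam):
    """Project a 3D point in the camera frame to pixel coordinates."""
    x = K @ X_cam
    return x[:2] / x[2]
```

A point on the optical axis projects to the principal point, e.g. `project(intrinsics_matrix(700, 700, 320, 240), np.array([0.0, 0.0, 2.0]))` yields `(320, 240)`, which is why unknown or drifting intrinsics directly corrupt depth and ego-motion estimates.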