2020
DOI: 10.1007/978-3-030-58536-5_35
Mapillary Planet-Scale Depth Dataset

Cited by 19 publications (9 citation statements); references 22 publications.
“…Cross-modal matching between the image and the map requires extracting semantic information from visual cues. It has been shown that monocular depth estimation can rely on semantic cues [3] and that both tasks have a beneficial synergy [29,34]. We thus rely on monocular inference to lift semantic features to the BEV space.…”
Section: Neural Bird's-eye View Inference
confidence: 99%
“…The use of crowdmapping has appeared in several research endeavors [22,23] as well as in industrial applications (Waymo [24], Lyft [25], Uber [26], and Google [27]). A prime example is Mapillary [28], a company that created a crowdsourcing platform whose main purpose is to map out every corner of the world. Maintenance of the maps is performed using computer vision and artificial intelligence (AI).…”
Section: Crowdsourced Data
confidence: 99%
“…Off-the-shelf SfM algorithms have limited success in reconstructing objects from Mapillary data. However, with appropriate calibration and tuning, the Mapillary Research team and collaborators successfully demonstrated that scene reconstruction from Mapillary data is possible [51]. The research led to the publication of a set of 750,000 Mapillary images with depth information.…”
Section: Task 2: Scene Reconstruction
confidence: 99%
“…The research led to the publication of a set of 750,000 Mapillary images with depth information. Antequera et al. [51] also showed that such a dataset can be used to train single-image depth networks to predict depth information from individual images. The methodology and data introduced in [51] will potentially be the basis for scene reconstruction in this project.…”
Section: Task 2: Scene Reconstruction
confidence: 99%