2012 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
DOI: 10.1109/ismar.2012.6402531
Wide-area scene mapping for mobile visual tracking

Abstract: We propose a system for easily preparing arbitrary wide-area environments for subsequent real-time tracking with a handheld device. Our system evaluation shows that minimal user effort is required to initialize a camera tracking session in an unprepared environment. We combine panoramas captured using a handheld omnidirectional camera from several viewpoints to create a point cloud model. After the offline modeling step, live camera pose tracking is initialized by feature point matching, and continuously updat…
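The initialization step mentioned in the abstract (matching feature points in a live camera frame against the offline point cloud model to recover a pose) can be sketched roughly as follows. This is an illustrative reconstruction only: the feature type, matcher, and thresholds below (SIFT, a ratio test, OpenCV's solvePnPRansac) are assumptions, not details taken from the paper.

```python
import numpy as np
import cv2

def initialize_pose(frame_gray, model_points3d, model_descriptors, K):
    """Estimate the first camera pose by matching 2D features in `frame_gray`
    against a pre-built 3D point cloud model.

    model_points3d    -- (N, 3) array of 3D points from the offline model
    model_descriptors -- (N, 128) float32 SIFT descriptors, one per 3D point
    K                 -- (3, 3) float camera intrinsic matrix

    Names and thresholds are illustrative assumptions, not the paper's
    implementation.
    """
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(frame_gray, None)
    if descriptors is None:
        return None  # no features detected in this frame

    # Match frame descriptors to model descriptors with a ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(descriptors, model_descriptors, k=2)
    good = [pair[0] for pair in knn
            if len(pair) == 2 and pair[0].distance < 0.8 * pair[1].distance]
    if len(good) < 6:
        return None  # not enough correspondences to solve for a pose

    pts2d = np.float32([keypoints[m.queryIdx].pt for m in good])
    pts3d = np.float32([model_points3d[m.trainIdx] for m in good])

    # Robust perspective-n-point with RANSAC to reject outlier matches.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d, pts2d, K, None, reprojectionError=4.0)
    return (rvec, tvec) if ok else None
```

In a pipeline like the one the abstract describes, the model descriptors would come from the offline panorama-based reconstruction, and the recovered pose would seed the subsequent frame-to-frame tracker.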

Cited by 55 publications (23 citation statements)
References: 25 publications
“…The matching is distributed over several frames to keep processing times low. Ventura and Höllerer [29] propose to query an image-based localization server to estimate the camera pose of the first image, allowing subsequent real-time pose tracking relative to the server-side 3D model, which is also kept locally on the mobile device. To efficiently match features in the following images to this model, the pose of the previous image is used to cull 3D points behind the camera and outside the image.…”
Section: Overall Approach
confidence: 99%
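The culling step described above, using the previous frame's pose to discard model points behind the camera or projecting outside the image, reduces the per-frame matching workload. A minimal sketch of such a visibility test, with assumed variable names and a standard pinhole projection rather than code from either paper:

```python
import numpy as np

def cull_points(points3d, R, t, K, width, height):
    """Keep only model points that the previous camera pose would see.

    points3d -- (N, 3) world-space 3D points
    R, t     -- rotation (3x3) and translation (3,) of the previous pose,
                mapping world coordinates to camera coordinates
    K        -- (3, 3) intrinsic matrix; width/height are image size in pixels

    Illustrative sketch only; names and conventions are assumptions.
    """
    cam = points3d @ R.T + t                 # transform into the camera frame
    in_front = cam[:, 2] > 1e-6              # discard points behind the camera

    visible = np.zeros(len(points3d), dtype=bool)
    proj = cam[in_front] @ K.T               # pinhole projection of the rest
    uv = proj[:, :2] / proj[:, 2:3]          # perspective divide to pixels
    visible[in_front] = (
        (uv[:, 0] >= 0) & (uv[:, 0] < width) &
        (uv[:, 1] >= 0) & (uv[:, 1] < height)
    )
    return points3d[visible]
```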
“…Nevertheless, the flexibility of mobile devices creates a strong need for largescale mobile localization, e.g., for city-scale augmented reality. Recently, Lim et al [22] and Ventura and Höllerer [29] proposed approaches for real-time, largescale mobile pose tracking. However, both approaches require to keep a 3D model of the environment on the device, which can already consume more than 100MB for a scene of size 8m × 5m [22], limiting their applicability for mobile devices with their hard memory restrictions.…”
Section: Introduction
confidence: 99%
“…More scalable approaches assume the availability of an external server, which sends the relevant model parts to the device [4] or even performs the actual localization [32,51,52]. The latter methods run SLAM on the device, enabling them to handle the latency of transmitting images to the server and to track the pose for periods where localization against the global model fails [32,44,52].…”
Section: Introduction and Related Work
confidence: 99%
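The split sketched in this citation, where local SLAM tracking keeps running on the device while localization against the global model is requested from a server and folded in whenever the (possibly slow) answer arrives, can be illustrated as below. The `camera`, `slam`, and `query_server` interfaces are hypothetical placeholders, not APIs from any of the cited systems.

```python
import queue
import threading

def localization_worker(request_q, result_q, query_server):
    """Background thread: send frames to the localization server and report
    global poses whenever they arrive. `query_server` is a hypothetical
    blocking call that may take hundreds of milliseconds."""
    while True:
        frame_id, image = request_q.get()
        global_pose = query_server(image)          # network round trip
        if global_pose is not None:
            result_q.put((frame_id, global_pose))

def tracking_loop(camera, slam, query_server):
    """Main loop: local SLAM tracks every frame without blocking, while
    server localization results are applied asynchronously."""
    request_q, result_q = queue.Queue(maxsize=1), queue.Queue()
    threading.Thread(target=localization_worker,
                     args=(request_q, result_q, query_server),
                     daemon=True).start()

    for frame_id, image in enumerate(camera):
        local_pose = slam.track(image)             # never waits on the network

        # Occasionally send a frame to the server (skip if one is in flight).
        if frame_id % 30 == 0 and not request_q.full():
            request_q.put((frame_id, image))

        # Apply any server result: align the local SLAM map to the global
        # model using the pose the server computed for that earlier frame.
        while not result_q.empty():
            stale_id, global_pose = result_q.get()
            slam.align_to_global(stale_id, global_pose)
```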
“…Since potential inaccuracies of the individual sensors add up, the positioning accuracy of this location-based approach is usually lower. There are some current research efforts such as the one presented by Ventura and Höllerer [16] that improve the positioning accuracy, but they often require explicit user input, special sensors or additional information.…”
Section: Related Work
confidence: 99%