Proceedings Computer Graphics International, 2004.
DOI: 10.1109/cgi.2004.1309266
Vision-based camera motion recovery for augmented reality

Abstract: We address the problem of tracking the 3D position and orientation of a camera, using the images it acquires while moving freely in unmodeled, arbitrary environments. This task has a broad spectrum of useful applications in domains such as augmented reality and video post production. Most of the existing methods for vision-based camera tracking are designed to operate in a batch, off-line mode, assuming that the whole video sequence to be tracked is available before tracking commences. Typically, such methods …

Cited by 11 publications (8 citation statements)
References 26 publications
“…A modern implementation of the Levenberg-Marquardt nonlinear least squares algorithm [3] running on a PC with OpenSUSE 10.2 Linux, Intel E6600 processor and 2GB of random access memory (RAM) was used for all numerical optimizations in this work. BIP and BIBOP have previously been shown to have symmetrical phase profiles from unconstrained optimization, so for this work half of the waveforms were optimized to save time with the second half being a mirror image of the first.…”
Section: Methods
confidence: 99%
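The citation above refers to a Levenberg-Marquardt nonlinear least squares implementation. As a minimal sketch of the technique (not the cited implementation), the following assumed example fits the parameters of an exponential model to noisy samples using SciPy's LM solver; the model, data, and parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical data: noisy samples of y = a * exp(b * x),
# with true parameters (a, b) = (2.0, -1.5).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y_noisy = 2.0 * np.exp(-1.5 * x) + 0.01 * rng.standard_normal(x.size)

def residuals(p):
    """Residual vector r_i = model(x_i; p) - y_i that LM minimizes."""
    return p[0] * np.exp(p[1] * x) - y_noisy

# method="lm" selects the Levenberg-Marquardt algorithm
# (unconstrained, requires #residuals >= #parameters).
fit = least_squares(residuals, x0=[1.0, 0.0], method="lm")
print(fit.x)  # close to [2.0, -1.5]
```

LM interpolates between gradient descent and Gauss-Newton steps via a damping term, which is why it is a common choice for the small, dense problems that arise in camera-parameter refinement.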
“…However, the control points used have to be visible in every frame, which restricts the range of views in which augmentations can take place. The method in (Lourakis and Argyros, 2004) is able to recover the camera positions in close to real-time through the chaining of homographies computed from the tracking of 3D planes. The AR system proposed in (Chia et al, 2002) computes camera pose by using the epipolar constraints that exist between every video frame and two keyframes.…”
Section: Related Work
confidence: 99%
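The homography-chaining idea mentioned above can be sketched in a few lines. This is a minimal, assumed illustration (not the cited method): given plane-induced homographies H_{k-1→k} between consecutive frames, composing them yields the mapping H_{0→k} from the reference frame to the current frame.

```python
import numpy as np

def chain_homographies(pairwise):
    """Compose consecutive frame-to-frame homographies H_{k-1->k}
    into reference-to-current homographies H_{0->k}."""
    H = np.eye(3)
    chained = []
    for Hk in pairwise:
        H = Hk @ H               # H_{0->k} = H_{k-1->k} @ H_{0->k-1}
        chained.append(H / H[2, 2])  # fix the projective scale
    return chained

# Usage with two synthetic pairwise homographies (pure pixel translations):
T1 = np.array([[1.0, 0.0, 5.0], [0.0, 1.0, 2.0], [0.0, 0.0, 1.0]])
T2 = np.array([[1.0, 0.0, 3.0], [0.0, 1.0, -1.0], [0.0, 0.0, 1.0]])
H_chain = chain_homographies([T1, T2])
# H_chain[-1] is the combined translation (8, 1) from frame 0 to frame 2
```

In practice each pairwise homography is estimated from tracked features on a scene plane, and drift accumulates along the chain, which is why such systems periodically refine the chained estimate against the reference frame.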
“…The first contribution concerns the instantaneous 3D motion estimation from image data, which can be useful for many applications in vision and robotics such as extrinsic calibration (Dornaika and Chung, 2008), visual servoing (Horaud et al, 1998), video indexing (Jasinschi et al, 2000), space robot localization (Johnson et al, 2007), and augmented reality (Lourakis and Argyros, 2004). What differentiates our work from existing ones is the use of image derivatives alone and not the optical flow field with a novel robust statistics solution.…”
Section: Introduction
confidence: 99%
“…Many algorithms have been proposed for estimating the 3D relative camera motions (discrete case) (Lourakis and Argyros, 2004) and the 3D velocity (differential case) (Baumela et al, 2000; Brooks et al, 1997; Rother and Carlsson, 2002).…”
Section: Introduction
confidence: 99%