Comparison between a conventional open-loop registration approach (a) and our closed-loop registration approach employing both world-space pose refinement (b) and screen-space pixel-wise corrections (c). The dragon, the square & axes object, and the shadows are augmented. In (a), errors in the estimates of the camera intrinsics and extrinsics (6DOF pose) result in visible misregistration that is neither measured nor corrected by a conventional open-loop approach. Such registration errors include direct virtual object misregistration (e.g., the square & axes object) and "phantom" object misregistration errors, including incorrect real-to-virtual occlusions (e.g., between the tower and the dragon) and associated shading effects between the real and the virtual (e.g., the virtual shadow cast by the tower); see the zoomed-in portions of the images. In our closed-loop approach, registration errors are detected using a model of the real scene and corrected both in world space, using camera pose refinement (b), and in screen space, using pixel-wise corrections (c), to address both rigid and non-rigid registration errors. The final result (c) is spatially accurate and exhibits visually coherent registration.
ABSTRACT
In Augmented Reality (AR), visible misregistration can be caused by many inherent error sources, such as errors in tracking, calibration, and modeling. In this paper we present a novel pixel-wise closed-loop registration framework that can automatically detect and correct registration errors using a reference model comprising the real scene model and the desired virtual augmentations. Registration errors are corrected both in global world space, via camera pose refinement, and in local screen space, via pixel-wise corrections, resulting in spatially accurate and visually coherent registration. Specifically, we present a registration-enforcing model-based tracking approach that weights important image regions while refining the camera pose estimates (from any conventional tracking method) to achieve better registration, even in the presence of modeling errors. To deal with remaining errors, which can be rigid or non-rigid, we compute the optical flow between the camera image and the real model image rendered with the refined pose, enabling direct screen-space pixel-wise corrections to misregistration. The estimated flow field can be applied to improve registration in two distinct ways: (1) forward warping of modeled on-real-object-surface augmentations (e.g., object re-texturing) into the camera image, retaining surface details that are not present in the virtual object; and (2) backward warping of the camera image into the real scene model, preserving full use of the dense geometry buffer (depth in particular) provided by the combined real-virtual model for registration, leading to pixel-accurate real-virtual occlusion. We discuss the trade-offs between, and different use cases of, forward and backward warping with model-based tracking in t...
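As an illustration of the world-space stage, the sketch below shows one way such registration-enforcing pose refinement could be set up: starting from a conventional tracker's 6DOF estimate, a weighted reprojection residual over known real-scene model points is minimized, with larger weights on regions that matter most for registration (e.g., near the augmentations). This is a minimal sketch under our own assumptions; the function name `refine_pose`, the sparse-point formulation, the weighting scheme, and the Levenberg-Marquardt solver are illustrative stand-ins, not the paper's implementation.

```python
# Hedged sketch: weighted 6DOF pose refinement from an initial tracker pose.
# All names (refine_pose, weights, etc.) are illustrative assumptions.
import cv2
import numpy as np
from scipy.optimize import least_squares

def refine_pose(pts3d, pts2d, weights, K, rvec0, tvec0):
    """Refine a pose (rvec, tvec) by weighted nonlinear least squares
    on the reprojection error of real-scene model points.

    pts3d:   (N, 3) model points, pts2d: (N, 2) observed image points,
    weights: (N,) per-point importance, K: 3x3 camera intrinsics."""
    def residuals(x):
        rvec, tvec = x[:3], x[3:]
        proj, _ = cv2.projectPoints(pts3d, rvec, tvec, K, None)
        # Weight residuals so regions important for registration
        # (e.g., near the augmentations) dominate the refinement.
        r = (proj.reshape(-1, 2) - pts2d) * np.sqrt(weights)[:, None]
        return r.ravel()

    x0 = np.hstack([rvec0.ravel(), tvec0.ravel()])
    sol = least_squares(residuals, x0, method="lm")  # Levenberg-Marquardt
    return sol.x[:3], sol.x[3:]
```

The point of the weighting is that, under modeling errors, a globally optimal fit may still be visibly misregistered; biasing the residual toward the augmented regions trades global accuracy for registration where it is perceptually relevant.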
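For the screen-space stage, the following is a minimal sketch of the pixel-wise correction, assuming OpenCV's Farneback algorithm as a stand-in dense flow estimator (the paper does not prescribe a particular flow method): flow is estimated from the real-model image rendered with the refined pose to the camera image, and the camera image is then backward-warped into model space so its pixels align with the model's dense depth buffer.

```python
# Hedged sketch: dense flow between rendered model image and camera image,
# followed by backward warping of the camera image into model space.
import cv2
import numpy as np

def pixelwise_correction(camera_img, model_img):
    """Return the camera image backward-warped into model space,
    plus the estimated model-to-camera flow field."""
    gray_cam = cv2.cvtColor(camera_img, cv2.COLOR_BGR2GRAY)
    gray_mod = cv2.cvtColor(model_img, cv2.COLOR_BGR2GRAY)

    # For each model-image pixel, where does it lie in the camera image?
    flow = cv2.calcOpticalFlowFarneback(
        gray_mod, gray_cam, None,
        pyr_scale=0.5, levels=4, winsize=21,
        iterations=3, poly_n=7, poly_sigma=1.5, flags=0)

    h, w = gray_mod.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (xs + flow[..., 0]).astype(np.float32)
    map_y = (ys + flow[..., 1]).astype(np.float32)

    # Backward warp: sample the camera image at the flow-displaced
    # positions, aligning real pixels with the model's depth buffer
    # for pixel-accurate real-virtual occlusion.
    warped = cv2.remap(camera_img, map_x, map_y, cv2.INTER_LINEAR)
    return warped, flow
```

Forward warping of an on-surface augmentation would instead estimate flow in the camera-to-model direction and sample the augmentation layer through it, retaining the real surface detail visible in the camera image at the cost of the depth-buffer alignment that backward warping provides.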