Abstract. We present a new active vision technique called zoom tracking. Zoom tracking is the continuous adjustment of a camera's focal length in order to keep a constant-sized image of an object moving along the camera's optical axis. Two methods for performing zoom tracking are presented: a closed-loop visual feedback algorithm based on optical flow, and use of depth information obtained from an autofocus camera's range sensor. We explore two uses of zoom tracking: recovery of depth information and improving the performance of scale-variant algorithms. We show that the image stability provided by zoom tracking improves the performance of algorithms that are scale variant, such as correlation-based trackers. While zoom tracking cannot totally compensate for an object's motion, due to the effect of perspective distortion, an analysis of this distortion provides a quantitative estimate of the performance of zoom tracking. Zoom tracking can be used to reconstruct a depth map of the tracked object. We show that under normal circumstances this reconstruction is much more accurate than depth from zooming, and works over a greater range than depth from axial motion while providing, in the worst case, only slightly less accurate results. Finally, we show how zoom tracking can also be used in time-to-contact calculations.
In this paper we present a new active vision technique called zoom tracking: the continuous adjustment of a camera's focal length to keep a constant-sized image of an object moving along the camera's optical axis. Two methods for performing zoom tracking are presented: a closed-loop visual feedback algorithm based on optical flow, and the use of depth information obtained from an autofocus camera's range sensor.

Zoom tracking cannot totally compensate for an object's motion: exact compensation holds only for object points on a single plane, and all points not on this plane undergo perspective distortion. We analyze and quantify this distortion, deriving an upper bound on the residual error of the zoom tracking process. Our experiments demonstrate the effect that zoom tracking has on a typically scale-variant algorithm: template matching.

The remainder of the paper is organized as follows. Related work is reviewed in Section 2. In Section 3 we discuss the motion model, imaging model, and optical flow field used in our work. The necessary equations for focal length control are provided in Section 4. In Section 5, we derive an upper bound on the error induced by perspective distortion. Experiments are presented in Section 6, and conclusions are given in Section 7.
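The core relationships behind zoom tracking can be illustrated with a simple pinhole-camera sketch. Under a pinhole model, an object of width W at depth Z images to width w = fW/Z, so holding w constant requires the focal length f to scale linearly with Z; inverting that control law recovers depth, and the ratio f/(df/dt) yields time to contact. The function names and numeric values below are illustrative assumptions, not the paper's implementation:

```python
# Pinhole-model sketch of zoom tracking (assumption: object moves purely
# along the optical axis, so image width w = f * W / Z).

def zoom_track_focal_length(f_prev, z_prev, z_new):
    """Focal length that keeps the image size constant as depth changes.

    Since w = f * W / Z, constant w requires f / Z to stay constant.
    """
    return f_prev * z_new / z_prev

def depth_from_focal_length(f_current, f_ref, z_ref):
    """Invert the control law: recover depth from the commanded focal length."""
    return z_ref * f_current / f_ref

def time_to_contact(f_current, df_dt):
    """Time to contact for an approaching object (dZ/dt < 0).

    Because f is proportional to Z while tracking, tau = -Z / (dZ/dt)
    equals -f / (df/dt).
    """
    return -f_current / df_dt

# Example: an object starts 2 m away with a 50 mm lens and recedes to 3 m.
f0, z0 = 50.0, 2000.0                            # mm
f1 = zoom_track_focal_length(f0, z0, 3000.0)     # 75 mm keeps image size fixed
z1 = depth_from_focal_length(f1, f0, z0)         # recovers 3000 mm
tau = time_to_contact(f1, -25.0)                 # f shrinking 25 mm/s -> 3 s
```

Note the design choice: depth recovery here needs only the zoom motor's commanded focal length and one reference depth, which is why tracking-based reconstruction can work over a larger range than a single axial-motion baseline.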