This paper presents a novel 3D-model-based computer-vision method for tracking the full six-degree-of-freedom (dof) pose (position and orientation) of a rigid body in real time. The methodology is targeted at autonomous navigation tasks, such as interception of, or rendezvous with, mobile targets. Because the proposed algorithm tracks an object's complete six-dof pose, it remains useful even when targets are not restricted to planar motion (e.g., flying or rough-terrain navigation). Tracking is achieved through a combination of textured model projection and optical flow. The main contribution of our work is the novel combination of optical flow with the z-buffer depth information produced during model projection, which allows six-dof tracking to be achieved with a single camera. A localized illumination-normalization filter has also been developed to improve robustness to shading. Real-time operation is achieved using GPU-based filters and a new data-reduction algorithm, developed within the framework of our project, that exploits colour-gradient redundancy. Colour-gradient redundancy is an important property of colour images, namely that the gradients of all colour channels are generally aligned; exploiting this property provides a threefold increase in speed. Processing rates of approximately 80 to 100 fps were obtained on synthetic and real target-motion sequences, and sub-pixel accuracies were achieved in tests performed under different lighting conditions.
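
To illustrate the core idea behind the single-camera six-dof formulation, consider the standard rigid-motion optical-flow model (a sketch only; the exact formulation, sign conventions, and solver used in the paper may differ). For an image point $(x, y)$ whose depth $Z$ is read from the z-buffer during model projection, with focal length $f$ and object translational and angular velocities $\mathbf{T} = (T_x, T_y, T_z)$ and $\boldsymbol{\Omega} = (\Omega_x, \Omega_y, \Omega_z)$, the image velocity $(u, v)$ is
\begin{align}
u &= \frac{f\,T_x - x\,T_z}{Z} \;-\; \frac{x y}{f}\,\Omega_x \;+\; \left(f + \frac{x^2}{f}\right)\Omega_y \;-\; y\,\Omega_z, \\
v &= \frac{f\,T_y - y\,T_z}{Z} \;-\; \left(f + \frac{y^2}{f}\right)\Omega_x \;+\; \frac{x y}{f}\,\Omega_y \;+\; x\,\Omega_z.
\end{align}
Because $Z$ is supplied by the z-buffer, each pixel with a measured flow vector contributes two linear equations in the six unknowns $(\mathbf{T}, \boldsymbol{\Omega})$; stacking the constraints from many pixels yields an overdetermined linear system whose least-squares solution recovers the full six-dof motion from a single camera, and integrating that motion over frames updates the tracked pose.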