Abstract-The evolution of an image sequence obtained by a real camera from a real scene can be conceptually separated into two parts: 1) motion of the camera and 2) motion of the objects in the scene. Most existing motion estimation algorithms use the block matching algorithm (BMA) to model both the camera motion and the local motion due to the objects. In this approach, successive frames are divided into small blocks and the motion of each block is approximated by a translation, resulting in one motion vector per block. In this paper, we propose two classes of algorithms for modeling the camera motion in video sequences. The first class can be applied in situations where there is no camera translation and the camera motion can be adequately modeled by zoom, pan, and rotation parameters. The second class is more general in that it can be applied to situations where the camera undergoes translational motion in addition to rotation, zoom, and pan. This class uses seven parameters to describe the motion of the camera and requires the depth map to be known at the receiver. The salient feature of both classes of algorithms is that the camera motion is estimated by binary matching of the edges in successive frames. We show that, unlike local motion estimation, matching of edges alone can be sufficient for estimating the camera motion parameters. Finally, we compare the rate-distortion characteristics of our algorithms with those of the BMA and show that they achieve performance comparable to that of the BMA at reduced computational complexity.
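To make the edge-matching idea concrete, the following is a minimal sketch, not the paper's algorithm, of estimating zoom, rotation, and pan by exhaustive search over binary edge maps of two successive frames. The function names, the gradient-threshold edge detector, the nearest-neighbour warp, and the candidate parameter grids are all illustrative assumptions.

    import numpy as np
    from scipy import ndimage

    def edge_map(frame, thresh=30.0):
        # Binary edge map from a simple gradient-magnitude threshold
        # (the operator and threshold are illustrative choices).
        gy, gx = np.gradient(frame.astype(float))
        return np.hypot(gx, gy) > thresh

    def warp_edges(edges, zoom, angle, pan):
        # Warp a binary edge map by a hypothesized zoom/rotation/pan about
        # the image centre, via inverse mapping with nearest-neighbour sampling.
        c = (np.array(edges.shape) - 1) / 2.0
        t = np.asarray(pan, dtype=float)            # (row, col) translation
        ca, sa = np.cos(angle), np.sin(angle)
        R_inv = np.array([[ca, sa], [-sa, ca]]) / zoom
        offset = c - R_inv @ (c + t)
        warped = ndimage.affine_transform(edges.astype(float), R_inv,
                                          offset=offset, order=0,
                                          mode='constant', cval=0.0)
        return warped > 0.5

    def estimate_camera_motion(prev, curr, zooms, angles, pans):
        # Exhaustive search over (zoom, rotation, pan) hypotheses; the score
        # is the count of coincident edge pixels between the warped previous
        # edge map and the current edge map.
        e_prev, e_curr = edge_map(prev), edge_map(curr)
        best, best_score = None, -1
        for z in zooms:
            for a in angles:
                for p in pans:
                    score = np.count_nonzero(warp_edges(e_prev, z, a, p) & e_curr)
                    if score > best_score:
                        best, best_score = (z, a, p), score
        return best

Because only binary edge maps are compared, each hypothesis is scored with a simple logical AND and a count, which is the source of the reduced computational cost relative to full intensity-based block matching.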