Camera motion information helps to infer higher-level semantic descriptions in many video applications, e.g., video retrieval. However, an efficient methodology for annotating camera motion is still an elusive goal. Here we propose a novel and efficient approach for partitioning a video document into sub-shots and characterizing their camera motion. We exploit the location (GPS) and digital compass data available on most current smartphone handsets to detect transitions between sub-shots from the variations of both the camera location and the shooting direction. The advantage of our method lies in its accuracy, and its computational efficiency enables it to be deployed on mobile devices and to process videos while they are being recorded. We utilize this capability to show how the hexagon-based (HEX) motion estimation algorithm in the H.264/AVC encoder can be simplified with the aid of our camera motion information. Our experimental results show that our scheme reduces the computation of the HEX algorithm by up to 50% while achieving comparable video quality.
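To illustrate the kind of sensor-based sub-shot segmentation described above, the following is a minimal sketch, not the paper's actual algorithm: it assumes one (latitude, longitude, heading) sample per video second and declares a sub-shot boundary whenever the camera has moved or turned beyond a threshold relative to the start of the current sub-shot. The thresholds, sampling rate, and function names are illustrative assumptions.

```python
import math


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))


def heading_delta_deg(h1, h2):
    """Smallest absolute difference between two compass headings in degrees."""
    d = abs(h1 - h2) % 360.0
    return min(d, 360.0 - d)


def detect_subshot_boundaries(samples, move_thresh_m=5.0, turn_thresh_deg=30.0):
    """samples: list of (lat, lon, heading_deg) tuples, one per second of video.
    Returns the sample indices at which a new sub-shot starts.
    Thresholds are hypothetical, not taken from the paper."""
    boundaries = []
    ref = samples[0]  # reference pose of the current sub-shot
    for i, s in enumerate(samples[1:], start=1):
        moved = haversine_m(ref[0], ref[1], s[0], s[1])
        turned = heading_delta_deg(ref[2], s[2])
        if moved > move_thresh_m or turned > turn_thresh_deg:
            boundaries.append(i)
            ref = s  # start a new sub-shot from this pose
    return boundaries


# Example: the camera pans sharply at the third sample.
samples = [
    (48.0, 11.0, 0.0),
    (48.0, 11.0, 5.0),   # small jitter, same sub-shot
    (48.0, 11.0, 90.0),  # large turn -> new sub-shot
    (48.0, 11.0, 92.0),
]
print(detect_subshot_boundaries(samples))  # -> [2]
```

Because each sample is compared only against the current sub-shot's reference pose, the cost is linear in the number of samples, which is what makes this style of segmentation cheap enough to run on the handset during recording.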