Mobile robots have become a useful tool in a wide range of applications. Their value lies in their ability to move autonomously through unknown environments and to adapt to changing conditions. To this end, a robot must be able to build a model of the environment and to estimate its position from the information captured by the sensors it is equipped with. Omnidirectional vision sensors are a robust option thanks to the richness of the data they capture. These data must be analysed to extract the relevant information that permits estimating the position of the robot, taking into account its number of degrees of freedom. In this work, several methods to estimate the relative height of a mobile robot are proposed and evaluated. The framework we present is based on the global appearance of the scenes, which has emerged as an efficient and robust alternative to methods based on local features. All the algorithms have been tested with several sets of images captured under real working conditions in indoor and outdoor spaces. The results show that global appearance descriptors are a feasible alternative for estimating the relative altitude of the robot topologically.
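To make the idea of a global appearance descriptor concrete, the sketch below shows one common choice from this family of methods: the Fourier signature of a panoramic image, whose DFT magnitudes are invariant to the robot's orientation (a rotation shifts the panorama's columns, which only changes the phase of the spectrum). This is a minimal illustration under assumed names (`fourier_signature`, `descriptor_distance`); it is not necessarily the descriptor used in this work.

```python
import numpy as np

def fourier_signature(panorama, n_coeffs=16):
    """Global-appearance descriptor of a grayscale panoramic image.

    Computes the magnitude of the first DFT coefficients of each
    image row. A rotation of the robot shifts the columns of the
    panorama, which leaves these magnitudes unchanged (shift theorem),
    so the descriptor is rotation-invariant.
    """
    spectrum = np.fft.fft(panorama.astype(float), axis=1)
    return np.abs(spectrum[:, :n_coeffs])

def descriptor_distance(d1, d2):
    """Euclidean distance between two descriptors; a smaller value
    means the two scenes look more alike globally."""
    return np.linalg.norm(d1 - d2)

# Example: the descriptor does not change when the panorama is
# rotated (columns rolled), but it does change between two
# genuinely different scenes (e.g. captured at different heights).
rng = np.random.default_rng(0)
scene_a = rng.random((8, 64))            # panorama at one pose
scene_a_rot = np.roll(scene_a, 5, axis=1)  # same pose, robot rotated
scene_b = rng.random((8, 64))            # a different scene

d_same = descriptor_distance(fourier_signature(scene_a),
                             fourier_signature(scene_a_rot))
d_diff = descriptor_distance(fourier_signature(scene_a),
                             fourier_signature(scene_b))
```

In a topological altitude-estimation setting, descriptors of the current image would be compared against descriptors of reference images captured at known heights, and the nearest reference (smallest distance) gives the estimated relative altitude.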