Interest point descriptors (e.g. the Scale-Invariant Feature Transform, SIFT, or Speeded-Up Robust Features, SURF) are often used both for classic image processing tasks (e.g. mosaic generation) and for higher-level machine learning tasks (e.g. segmentation or classification). Hyperspectral images have recently been gaining popularity as a potent data source for scene analysis, material identification, anomaly detection and process state estimation. The structure of hyperspectral images is much more complex than that of traditional color or monochrome images, as they comprise a large number of bands, each corresponding to a narrow range of frequencies. Because image properties vary across bands, applying interest point descriptors to hyperspectral data is not straightforward. To the best of our knowledge, there has been no study to date of the performance of interest point descriptors on hyperspectral images that both compares a number of methods and uses a dataset with significant geometric transformations. Here, we study four popular methods (SIFT, SURF, BRISK, ORB) applied to a complex scene recorded from several viewpoints. We present experimental results by observing how well the methods estimate the cameras' 3D positions, which we propose as a general performance measure.
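
The following is a minimal sketch, not part of the study itself, of how the four descriptors could be applied to a single hyperspectral band and how a relative camera pose might be recovered from the resulting matches using OpenCV. The band index, file names, intrinsic matrix K and detector parameters are illustrative assumptions; SURF additionally requires the opencv-contrib nonfree modules.

```python
# Illustrative sketch only; band indices, file names and K are hypothetical.
import cv2
import numpy as np

def detect_and_describe(band_img, method="SIFT"):
    """Detect keypoints and compute descriptors on a single (8-bit) band."""
    if method == "SIFT":
        det = cv2.SIFT_create()
    elif method == "SURF":
        det = cv2.xfeatures2d.SURF_create()  # needs opencv-contrib (nonfree)
    elif method == "BRISK":
        det = cv2.BRISK_create()
    else:  # ORB
        det = cv2.ORB_create()
    return det.detectAndCompute(band_img, None)

def relative_pose(kp1, des1, kp2, des2, K, binary=False):
    """Match descriptors between two views and recover the relative camera pose."""
    norm = cv2.NORM_HAMMING if binary else cv2.NORM_L2  # Hamming for BRISK/ORB
    matches = cv2.BFMatcher(norm, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # rotation and unit-scale translation between viewpoints

# Hypothetical usage: compare the estimated pose against known camera positions.
K = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])  # assumed intrinsics
band_a = cv2.imread("view1_band42.png", cv2.IMREAD_GRAYSCALE)  # one band, view 1
band_b = cv2.imread("view2_band42.png", cv2.IMREAD_GRAYSCALE)  # same band, view 2
kp1, des1 = detect_and_describe(band_a, "SIFT")
kp2, des2 = detect_and_describe(band_b, "SIFT")
R, t = relative_pose(kp1, des1, kp2, des2, K)
```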