“…Hence, there is a need for new retrieval systems tailored to "mobile images", coping with both the networking and hardware limitations of mobile devices. For instance, Yang and Qian [200] propose an approach that leverages the fact that users often take multiple shots of a given scene: the mobile device is first searched for photos visually similar to the query image. The query, together with the relevant photos found on the device, is then used to mine "salient visual words", which are ranked by their contribution so as to reduce noise and the computational cost of spatial verification.…”
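
The following is a minimal sketch, not the authors' implementation, of the general idea described above: given a query photo and visually similar photos already stored on the device, local descriptors are quantized into a small visual vocabulary, and visual words are ranked by how many of the similar photos they also appear in (a simple proxy for their "contribution"); only the top-ranked words would then be passed on to spatial verification. The feature type (SIFT), vocabulary size, and the specific ranking heuristic are assumptions made purely for illustration.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def extract_descriptors(image_paths):
    """Detect SIFT keypoints and return one descriptor matrix per image."""
    sift = cv2.SIFT_create()  # assumption: SIFT as the local feature
    per_image = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(img, None)
        per_image.append(desc if desc is not None else np.empty((0, 128), np.float32))
    return per_image

def mine_salient_words(query_path, similar_paths, vocab_size=200, top_k=50):
    """Cluster descriptors into visual words, then rank the words present in the
    query by how many of the on-device similar photos also contain them."""
    all_desc = extract_descriptors([query_path] + similar_paths)
    vocab = KMeans(n_clusters=vocab_size, n_init=4, random_state=0)
    vocab.fit(np.vstack(all_desc))
    # For each visual word, count the number of similar photos in which it occurs.
    word_support = np.zeros(vocab_size, dtype=int)
    for desc in all_desc[1:]:
        if len(desc):
            word_support[np.unique(vocab.predict(desc))] += 1
    # Keep only words that also occur in the query, ranked by cross-photo support;
    # these are the candidates one would feed to spatial verification.
    query_words = set(vocab.predict(all_desc[0])) if len(all_desc[0]) else set()
    ranked = [w for w in np.argsort(-word_support) if w in query_words]
    return ranked[:top_k]
```

In this sketch, restricting attention to the highest-support words is what would cut down the number of feature correspondences that spatial verification has to check, which is the stated motivation in the passage above.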