We introduce an extension of bag-of-words image representations to encode spatial layout. Using the Fisher kernel framework, we derive a representation that encodes the spatial mean and variance of the image regions associated with each visual word. We extend this representation by using a Gaussian mixture model to encode spatial layout, and show that this model is related to a soft-assign version of the spatial pyramid representation. We also combine our representation of spatial layout with the use of Fisher kernels to encode the appearance of local features. Through an extensive experimental evaluation, we show that our representation yields state-of-the-art image categorization results while being more compact than spatial pyramid representations. In particular, using Fisher kernels to encode both appearance and spatial layout results in an image representation that is computationally efficient, compact, and yields excellent performance with linear classifiers.
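As a rough illustration of the spatial statistics this abstract refers to, the sketch below computes, for each visual word, the mean and variance of the normalized positions of the local features assigned to it. This is a simplified, hard-assignment approximation; the actual method derives these terms as Fisher-kernel gradients of per-word spatial Gaussians, and the function name and interface here are illustrative assumptions.

```python
# Minimal sketch (not the authors' exact derivation): per-visual-word
# spatial statistics under hard assignment of local descriptors to words.
import numpy as np

def spatial_layout_encoding(positions, word_ids, vocab_size):
    """positions: (N, 2) array of normalized (x, y) feature locations,
    word_ids: (N,) array of visual-word indices."""
    dims = []
    for w in range(vocab_size):
        pts = positions[word_ids == w]
        if len(pts) == 0:
            # No features assigned to this word: encode a neutral cell.
            dims.append(np.zeros(4))
            continue
        mu = pts.mean(axis=0)           # spatial mean (x, y)
        var = pts.var(axis=0) + 1e-6    # spatial variance (x, y)
        dims.append(np.concatenate([mu, var]))
    return np.concatenate(dims)         # 4 * vocab_size dimensions
```

The resulting vector is far more compact than concatenating histograms over spatial pyramid cells, which is the comparison the abstract draws.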
Figure 1. Top-ranked images for the query 'eiffel tower', using (a) a text-based image search engine and (b) our model, ordered left-to-right.
Web image search using text queries has received considerable attention. However, current state-of-the-art approaches require training models for every new query, and are therefore unsuitable for real-world web search applications. The key contribution of this paper is to introduce generic classifiers based on query-relative features, which can be used for new queries without additional training. They combine textual features, based on the occurrence of query terms in web pages and image meta-data, with visual histogram representations of images. The second contribution of the paper is a new database for the evaluation of web image search algorithms. It includes 71,478 images returned by a web search engine for 353 different search queries, along with their meta-data and ground-truth annotations. Using this data set, we compared the image ranking performance of our model with that of the search engine, and with an approach that learns a separate classifier for each query. Our generic models that use query-relative features improve significantly over the raw search engine ranking, and also outperform the query-specific models.
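To make the idea of query-relative features concrete, the sketch below computes textual features that depend only on where the query terms occur in an image's meta-data, so the same feature extractor and a single generic classifier can be reused for unseen queries. The field names and the fraction-of-terms statistic are illustrative assumptions, not the paper's exact feature set.

```python
# Hedged sketch of query-relative textual features: how strongly the
# query terms appear in several (assumed) meta-data fields.
def query_relative_text_features(query, meta):
    terms = query.lower().split()
    fields = ["title", "alt_text", "url", "surrounding_text"]
    feats = []
    for field in fields:
        text = meta.get(field, "").lower()
        # Fraction of query terms present in this field.
        feats.append(sum(t in text for t in terms) / max(len(terms), 1))
    return feats

# The same extractor works for any new query, which is what lets a single
# generic ranking model be applied without per-query retraining.
print(query_relative_text_features(
    "eiffel tower",
    {"title": "Eiffel Tower at night", "alt_text": "tower", "url": "paris.jpg"}))
```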
State-of-the-art approaches for text classification leverage a transformer architecture with a linear layer on top that outputs a class distribution for a given prediction problem. While effective, this approach suffers from conceptual limitations that affect its utility in few-shot or zero-shot transfer learning scenarios. First, the number of classes to predict needs to be pre-defined. In a transfer learning setting, in which new classes are added to an already trained classifier, all information contained in a linear layer is therefore discarded, and a new layer is trained from scratch. Second, this approach only learns the semantics of classes implicitly from training examples, as opposed to leveraging the explicit semantic information provided by the natural language names of the classes. For instance, a classifier trained to predict the topics of news articles might have classes like "business" or "sports" that themselves carry semantic information. Extending a classifier to predict a new class named "politics" with only a handful of training examples would benefit from both leveraging the semantic information in the name of a new class and using the information contained in the already trained linear layer. This paper presents a novel formulation of text classification that addresses these limitations. It imbues the notion of the task at hand into the transformer model itself by factorizing arbitrary classification problems into a generic binary classification problem. We present experiments in few-shot and zero-shot transfer learning that show that our approach significantly outperforms previous approaches on small training data and can even learn to predict new classes with no training examples at all.
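A minimal sketch of the factorization described above: each (class name, text) pair is turned into a binary "does this label apply?" instance, so one generic binary classifier can serve any label set, including classes it never saw during training. The separator token and the pairing format are assumptions for illustration, not the paper's exact input encoding.

```python
# Sketch: factorize an arbitrary classification problem into binary
# instances that pair the class name (its natural language semantics)
# with the input text.
def to_binary_instances(text, candidate_labels, true_labels):
    instances = []
    for label in candidate_labels:
        pair = f"{label} [SEP] {text}"   # label semantics enter the input
        instances.append((pair, 1 if label in true_labels else 0))
    return instances

# A new class such as "politics" simply becomes another pair at inference
# time, with no new output layer to train.
print(to_binary_instances(
    "Parliament votes on the new budget.",
    ["business", "sports", "politics"],
    ["politics"]))
```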
The paper presents an approach to multimodal image registration. The method is developed for aligning infrared (IR) and visual (RGB) images of facades. It is based on mapping clouds of points extracted by a corner detector applied to both images. The experiments show that corners are suitable features for our application. In the alignment process, a number of transformation hypotheses are generated and evaluated. The evaluation is performed by measuring the similarity between the RGB corners and the transformed corners from the IR image. The directed partial Hausdorff distance is used as a robust similarity measure. The implemented system has been tested on various IR-RGB pairs of images of buildings. The results show that the method can be used for image registration, but they also expose some typical problems.
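For reference, the directed partial Hausdorff distance used to score a transformation hypothesis can be sketched as the K-th ranked value of the nearest-neighbour distances from the transformed IR corners to the RGB corners; taking a ranked value rather than the maximum makes the measure robust to outliers and missing corners. The rank fraction below is an assumed example, not the paper's setting.

```python
# Sketch of the directed partial Hausdorff distance between two corner
# point clouds, as used to evaluate a candidate transformation.
import numpy as np

def directed_partial_hausdorff(A, B, frac=0.5):
    """A: (n, 2) transformed IR corners, B: (m, 2) RGB corners."""
    # Nearest-neighbour distance from each point of A to the set B.
    d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)).min(axis=1)
    k = max(int(frac * len(d)) - 1, 0)
    return np.sort(d)[k]   # K-th ranked value instead of the maximum
```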